I0217 22:31:45.882935       6 e2e.go:243] Starting e2e run "5d432f22-6cfa-4901-8986-afee7c80f2e1" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1613601104 - Will randomize all specs
Will run 215 of 4413 specs

Feb 17 22:31:46.063: INFO: >>> kubeConfig: /root/.kube/config
Feb 17 22:31:46.067: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 17 22:31:46.089: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 17 22:31:46.120: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 17 22:31:46.120: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 17 22:31:46.120: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 17 22:31:46.126: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Feb 17 22:31:46.126: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 17 22:31:46.126: INFO: e2e test version: v1.15.12
Feb 17 22:31:46.128: INFO: kube-apiserver version: v1.15.12
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:31:46.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Feb 17 22:31:46.990: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-a394b1bf-0bb5-4973-bcb6-5aad5b86bd6e
STEP: Creating a pod to test consume configMaps
Feb 17 22:31:47.269: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b" in namespace "projected-2831" to be "success or failure"
Feb 17 22:31:47.532: INFO: Pod "pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b": Phase="Pending", Reason="", readiness=false. Elapsed: 262.723213ms
Feb 17 22:31:49.682: INFO: Pod "pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.413211748s
Feb 17 22:31:51.718: INFO: Pod "pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448531368s
Feb 17 22:31:53.790: INFO: Pod "pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.520857404s
Feb 17 22:31:55.793: INFO: Pod "pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.524220733s
Feb 17 22:31:58.030: INFO: Pod "pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.760934907s
Feb 17 22:32:00.719: INFO: Pod "pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b": Phase="Running", Reason="", readiness=true. Elapsed: 13.449977653s
Feb 17 22:32:02.722: INFO: Pod "pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.453175745s
STEP: Saw pod success
Feb 17 22:32:02.722: INFO: Pod "pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b" satisfied condition "success or failure"
Feb 17 22:32:02.725: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b container projected-configmap-volume-test: 
STEP: delete the pod
Feb 17 22:32:03.163: INFO: Waiting for pod pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b to disappear
Feb 17 22:32:03.209: INFO: Pod pod-projected-configmaps-f5e35873-3180-4bee-ba8d-7af704a7fa5b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 22:32:03.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2831" for this suite.
Feb 17 22:32:15.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 22:32:15.458: INFO: namespace projected-2831 deletion completed in 12.247088607s

• [SLOW TEST:29.330 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
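The test above mounts a ConfigMap through a projected volume and reads it from a container running as a non-root UID. A minimal standalone reproduction of that pod shape, as a sketch (the names, image, and data key here are hypothetical illustrations; the suite generates random names and uses its own test image):

    kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo-projected-cm        # hypothetical name for illustration
    data:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-projected-pod       # hypothetical name for illustration
    spec:
      securityContext:
        runAsUser: 1000              # the "as non-root" part of the test
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox
        command: ["cat", "/etc/projected/data-1"]
        volumeMounts:
        - name: cm
          mountPath: /etc/projected
      volumes:
      - name: cm
        projected:                   # projected volume source wrapping the ConfigMap
          sources:
          - configMap:
              name: demo-projected-cm
    EOF

Like the test's "success or failure" wait, the outcome can then be polled with kubectl get pod demo-projected-pod -o jsonpath='{.status.phase}'.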
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:32:15.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb 17 22:32:16.037: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb 17 22:32:16.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8295'
Feb 17 22:33:13.341: INFO: stderr: ""
Feb 17 22:33:13.341: INFO: stdout: "service/redis-slave created\n"
Feb 17 22:33:13.341: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb 17 22:33:13.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8295'
Feb 17 22:33:20.864: INFO: stderr: ""
Feb 17 22:33:20.864: INFO: stdout: "service/redis-master created\n"
Feb 17 22:33:20.864: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 17 22:33:20.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8295'
Feb 17 22:33:31.653: INFO: stderr: ""
Feb 17 22:33:31.653: INFO: stdout: "service/frontend created\n"
Feb 17 22:33:31.653: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb 17 22:33:31.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8295'
Feb 17 22:33:34.987: INFO: stderr: ""
Feb 17 22:33:34.987: INFO: stdout: "deployment.apps/frontend created\n"
Feb 17 22:33:34.988: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 17 22:33:34.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8295'
Feb 17 22:33:38.223: INFO: stderr: ""
Feb 17 22:33:38.223: INFO: stdout: "deployment.apps/redis-master created\n"
Feb 17 22:33:38.224: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb 17 22:33:38.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8295'
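Both Deployments above set GET_HOSTS_FROM=dns, so the PHP frontend locates redis-master and redis-slave purely through the Services' cluster DNS names; nothing works until those names resolve. A quick way to check that from inside the namespace, as a sketch (assuming busybox's resolver is good enough for the purpose):

    # One-off pod that resolves the backend Service names the same way
    # the frontend containers will.
    kubectl --kubeconfig=/root/.kube/config run dns-probe --rm -it --restart=Never \
      --namespace=kubectl-8295 --image=busybox -- nslookup redis-slave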
Feb 17 22:33:44.148: INFO: stderr: ""
Feb 17 22:33:44.148: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb 17 22:33:44.148: INFO: Waiting for all frontend pods to be Running.
Feb 17 22:37:09.204: INFO: Waiting for frontend to serve content.
Feb 17 22:37:13.944: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: 
Feb 17 22:37:20.278: INFO: Failed to get response from guestbook. err: the server is currently unable to handle the request (get services frontend), response: 
Feb 17 22:37:26.532: INFO: Failed to get response from guestbook. err: , response: 
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
Feb 17 22:37:34.997: INFO: Failed to get response from guestbook. err: , response:
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
Feb 17 22:37:41.023: INFO: Failed to get response from guestbook. err: , response:
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
Feb 17 22:37:47.038: INFO: Failed to get response from guestbook. err: , response:
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
Feb 17 22:37:53.059: INFO: Failed to get response from guestbook. err: , response:
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
Feb 17 22:37:59.107: INFO: Failed to get response from guestbook. err: , response:
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
Feb 17 22:38:08.131: INFO: Failed to get response from guestbook. err: , response:
Fatal error: Uncaught exception 'Predis\Connection\ConnectionException' with message 'Connection refused [tcp://redis-slave:6379]' in /usr/local/lib/php/Predis/Connection/AbstractConnection.php:155 Stack trace: #0 /usr/local/lib/php/Predis/Connection/StreamConnection.php(128): Predis\Connection\AbstractConnection->onConnectionError('Connection refu...', 111) #1 /usr/local/lib/php/Predis/Connection/StreamConnection.php(178): Predis\Connection\StreamConnection->createStreamSocket(Object(Predis\Connection\Parameters), 'tcp://redis-sla...', 4) #2 /usr/local/lib/php/Predis/Connection/StreamConnection.php(100): Predis\Connection\StreamConnection->tcpStreamInitializer(Object(Predis\Connection\Parameters)) #3 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(81): Predis\Connection\StreamConnection->createResource() #4 /usr/local/lib/php/Predis/Connection/StreamConnection.php(258): Predis\Connection\AbstractConnection->connect() #5 /usr/local/lib/php/Predis/Connection/AbstractConnection.php(180): Predis\Connection\Stream in /usr/local/lib/php/Predis/Connection/AbstractConnection.php on line 155
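The responses above show the frontend itself is up (PHP is answering) but Predis cannot reach tcp://redis-slave:6379: the Service name resolves, yet no ready slave pod is behind it, which matches the recovery a few seconds later once the pods finish starting. That state can be confirmed by hand while the namespace exists; a sketch against the same cluster:

    # An empty ENDPOINTS column here would explain the refused connections:
    kubectl --kubeconfig=/root/.kube/config get endpoints redis-slave --namespace=kubectl-8295

    # Correlate with pod readiness; pods that are Running but not Ready
    # are not added to the Service's ready endpoints.
    kubectl --kubeconfig=/root/.kube/config get pods --namespace=kubectl-8295 -o wide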
Feb 17 22:38:13.169: INFO: Trying to add a new entry to the guestbook.
Feb 17 22:38:13.263: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb 17 22:38:13.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8295'
Feb 17 22:38:14.620: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 22:38:14.620: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 17 22:38:14.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8295'
Feb 17 22:38:17.719: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 22:38:17.719: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 17 22:38:17.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8295'
Feb 17 22:38:22.436: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 22:38:22.436: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 17 22:38:22.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8295'
Feb 17 22:38:22.728: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 22:38:22.728: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 17 22:38:22.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8295'
Feb 17 22:38:23.404: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 22:38:23.404: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 17 22:38:23.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8295'
Feb 17 22:38:25.134: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 22:38:25.134: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 22:38:25.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8295" for this suite.
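Every delete above uses --grace-period=0 --force, which is why each one prints the "Immediate deletion" warning: the API server acknowledges the delete before the containers are confirmed dead. When reproducing this cleanup by hand, it is worth polling afterwards rather than trusting the immediate acknowledgement; a sketch:

    # Force deletion returns immediately; confirm the namespace is actually
    # drained before tearing it down or reusing the names.
    kubectl --kubeconfig=/root/.kube/config get pods --namespace=kubectl-8295 --no-headers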
Feb 17 22:40:16.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 22:40:16.956: INFO: namespace kubectl-8295 deletion completed in 1m51.604427509s

• [SLOW TEST:481.497 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:40:16.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 22:40:18.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6851" for this suite.
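The [It] body above emits no log lines because the interesting work happens in the BeforeEach hooks: a pod whose container command always fails is created, and the test then asserts that the resulting permanently crash-looping pod can still be deleted. A sketch of that pod shape (the name and image choice here are assumptions, not the suite's exact spec):

    kubectl --kubeconfig=/root/.kube/config apply --namespace=kubelet-test-6851 -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: bin-false                # hypothetical name for illustration
    spec:
      containers:
      - name: bin-false
        image: busybox
        command: ["/bin/false"]      # exits non-zero immediately, every restart
    EOF
    # The pod never becomes Ready, but deletion must still succeed:
    kubectl --kubeconfig=/root/.kube/config delete pod bin-false --namespace=kubelet-test-6851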
Feb 17 22:40:30.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 22:40:30.533: INFO: namespace kubelet-test-6851 deletion completed in 12.128957318s

• [SLOW TEST:13.576 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:40:30.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6597.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6597.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6597.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6597.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6597.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6597.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6597.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6597.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6597.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6597.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6597.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 146.76.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.76.146_udp@PTR;check="$$(dig +tcp +noall +answer +search 146.76.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.76.146_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6597.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6597.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6597.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6597.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6597.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6597.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6597.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6597.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6597.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6597.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6597.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 146.76.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.76.146_udp@PTR;check="$$(dig +tcp +noall +answer +search 146.76.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.76.146_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 17 22:41:43.061: INFO: Unable to read wheezy_udp@dns-test-service.dns-6597.svc.cluster.local from pod dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b: the server could not find the requested resource (get pods dns-test-b3241596-5663-491d-aed3-62eb88dce33b)
Feb 17 22:41:43.252: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6597.svc.cluster.local from pod dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b: the server could not find the requested resource (get pods dns-test-b3241596-5663-491d-aed3-62eb88dce33b)
Feb 17 22:41:43.462: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local from pod dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b: the server could not find the requested resource (get pods dns-test-b3241596-5663-491d-aed3-62eb88dce33b)
Feb 17 22:41:43.466: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local from pod dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b: the server could not find the requested resource (get pods dns-test-b3241596-5663-491d-aed3-62eb88dce33b)
Feb 17 22:41:44.434: INFO: Unable to read jessie_udp@dns-test-service.dns-6597.svc.cluster.local from pod dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b: the server could not find the requested resource (get pods dns-test-b3241596-5663-491d-aed3-62eb88dce33b)
Feb 17 22:41:44.437: INFO: Unable to read jessie_tcp@dns-test-service.dns-6597.svc.cluster.local from pod dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b: the server could not find the requested resource (get pods dns-test-b3241596-5663-491d-aed3-62eb88dce33b)
Feb 17 22:41:44.439: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local from pod dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b: the server could not find the requested resource (get pods dns-test-b3241596-5663-491d-aed3-62eb88dce33b)
Feb 17 22:41:44.443: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local from pod dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b: the server could not find the requested resource (get pods dns-test-b3241596-5663-491d-aed3-62eb88dce33b)
Feb 17 22:41:53.169: INFO: Lookups using dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b failed for: [wheezy_udp@dns-test-service.dns-6597.svc.cluster.local wheezy_tcp@dns-test-service.dns-6597.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local jessie_udp@dns-test-service.dns-6597.svc.cluster.local jessie_tcp@dns-test-service.dns-6597.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local]
Feb 17 22:41:58.937: INFO: Unable to read wheezy_udp@dns-test-service.dns-6597.svc.cluster.local from pod dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b: the server could not find the requested resource (get pods dns-test-b3241596-5663-491d-aed3-62eb88dce33b)
Feb 17 22:41:59.770: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6597.svc.cluster.local from pod dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b: the server could not find the requested resource (get pods dns-test-b3241596-5663-491d-aed3-62eb88dce33b)
Feb 17 22:41:59.774: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local from pod dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b: the server could not find the requested resource (get pods dns-test-b3241596-5663-491d-aed3-62eb88dce33b)
Feb 17 22:42:02.872: INFO: Lookups using dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b failed for: [wheezy_udp@dns-test-service.dns-6597.svc.cluster.local wheezy_tcp@dns-test-service.dns-6597.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6597.svc.cluster.local]
Feb 17 22:42:03.281: INFO: DNS probes using dns-6597/dns-test-b3241596-5663-491d-aed3-62eb88dce33b succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 22:42:18.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6597" for this suite.
Feb 17 22:42:34.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 22:42:34.217: INFO: namespace dns-6597 deletion completed in 14.430143476s

• [SLOW TEST:123.683 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
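The "Unable to read wheezy_udp@..." lines in the DNS test above are the framework failing to fetch the probe containers' result files before the lookups converge, not DNS server errors; once every expected record resolves, the run is declared a success. The same style of lookup can be run interactively against any live Service; a sketch (the dnsutils image and the reuse of this test's record name are assumptions for illustration):

    # One-off pod running the same kind of query the probe scripts use.
    kubectl --kubeconfig=/root/.kube/config run dns-check --rm -it --restart=Never \
      --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.1 -- \
      dig +search dns-test-service.dns-6597.svc.cluster.local A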
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:42:34.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-407
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-407
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-407
Feb 17 22:42:35.367: INFO: Found 0 stateful pods, waiting for 1
Feb 17 22:42:45.864: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 17 22:42:58.027: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 17 22:43:05.372: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 17 22:43:16.775: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb 17 22:43:27.013: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb 17 22:43:27.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 22:44:01.161: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Feb 17 22:44:01.161: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 22:44:01.161: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Feb 17 22:44:01.307: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 17 22:44:11.379: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 17 22:44:11.379: INFO: Waiting for statefulset status.replicas updated to 0
Feb 17 22:44:14.458: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Feb 17 22:44:14.458: INFO: ss-0  iruya-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:42:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:42:37 +0000 UTC }]
Feb 17 22:44:14.458: INFO: 
Feb 17 22:44:14.458: INFO: StatefulSet ss has not reached scale 3, at 1
Feb 17 22:44:15.523: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.053640955s
Feb 17 22:44:16.649: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988413559s
Feb 17 22:44:17.732: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.862571313s
Feb 17 22:44:19.476: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.779057373s
Feb 17 22:44:20.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.03493164s
Feb 17 22:44:22.057: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.64633889s
Feb 17 22:44:23.272: INFO: Verifying statefulset ss doesn't scale past 3 for another 453.97109ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-407
Feb 17 22:44:24.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:44:24.918: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Feb 17 22:44:24.918: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 17 22:44:24.918: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Feb 17 22:44:24.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:44:25.318: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Feb 17 22:44:25.318: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 17 22:44:25.318: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Feb 17 22:44:25.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:44:29.216: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Feb 17 22:44:29.216: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 17 22:44:29.216: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Feb 17 22:44:29.679: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 22:44:29.679: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=false
Feb 17 22:44:40.399: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 22:44:40.399: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 22:44:40.399: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb 17 22:44:40.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 22:44:40.642: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Feb 17 22:44:40.642: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 22:44:40.642: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Feb 17 22:44:40.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 22:44:40.983: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Feb 17 22:44:40.983: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 22:44:40.983: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Feb 17 22:44:40.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 22:44:41.275: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Feb 17 22:44:41.275: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 22:44:41.275: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Feb 17 22:44:41.275: INFO: Waiting for statefulset status.replicas updated to 0
Feb 17 22:44:41.379: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 17 22:44:53.487: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 17 22:44:53.488: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 17 22:44:53.488: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 17 22:44:57.706: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Feb 17 22:44:57.706: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:42:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:42:37 +0000 UTC }]
Feb 17 22:44:57.706: INFO: ss-1  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:14 +0000 UTC }]
Feb 17 22:44:57.706: INFO: ss-2  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:14 +0000 UTC }]
Feb 17 22:44:57.706: INFO: 
Feb 17 22:44:57.706: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 17 22:45:00.629: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Feb 17 22:45:00.629: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:42:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:42:37 +0000 UTC }]
Feb 17 22:45:00.629: INFO: ss-1  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:14 +0000 UTC }]
Feb 17 22:45:00.629: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:14 +0000 UTC }]
Feb 17 22:45:00.629: INFO: 
Feb 17 22:45:00.629: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 17 22:45:04.882: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Feb 17 22:45:04.882: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:42:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:42:37 +0000 UTC }]
Feb 17 22:45:04.882: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:14 +0000 UTC }]
Feb 17 22:45:04.882: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:14 +0000 UTC }]
Feb 17 22:45:04.882: INFO: 
Feb 17 22:45:04.882: INFO: StatefulSet ss has not reached scale 0, at 3
Feb 17 22:45:08.324: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Feb 17 22:45:08.324: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:42:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:42:37 +0000 UTC }]
Feb 17 22:45:08.324: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:14 +0000 UTC }]
Feb 17 22:45:08.324: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:44:14 +0000 UTC }]
Feb 17 22:45:08.324: INFO: 
Feb 17 22:45:08.324: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-407
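The index.html shuffling throughout this test works as a readiness toggle: the stateful pods' Ready condition tracks an HTTP check against the nginx web root, so moving the file away flips a pod to Ready=false without restarting it, and moving it back restores readiness. The flip can be watched directly while the pods exist; a sketch (the jsonpath queries are illustrative, not part of the suite):

    # The Ready condition toggles while the container itself never restarts.
    kubectl --kubeconfig=/root/.kube/config get pod ss-0 --namespace=statefulset-407 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    kubectl --kubeconfig=/root/.kube/config get pod ss-0 --namespace=statefulset-407 \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'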
Feb 17 22:45:09.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:45:17.186: INFO: rc: 1
Feb 17 22:45:17.187: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "d991ac8a96f5861b4ffcf2a3768d3cdcc69340f58eca0de8f4f8dc986f7a02e1": cannot exec in a deleted state: unknown [] 0xc002bc7d10 exit status 1 true [0xc00293e348 0xc00293e378 0xc00293e390] [0xc00293e348 0xc00293e378 0xc00293e390] [0xc00293e370 0xc00293e388] [0xba70e0 0xba70e0] 0xc002459ce0 }:
Command stdout:
stderr: error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "d991ac8a96f5861b4ffcf2a3768d3cdcc69340f58eca0de8f4f8dc986f7a02e1": cannot exec in a deleted state: unknown
error: exit status 1
Feb 17 22:45:27.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:45:27.809: INFO: rc: 1
Feb 17 22:45:27.809: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0027389f0 exit status 1 true [0xc0000f3900 0xc0000f3948 0xc0000f3a30] [0xc0000f3900 0xc0000f3948 0xc0000f3a30] [0xc0000f3940 0xc0000f39b8] [0xba70e0 0xba70e0] 0xc002125620 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:45:37.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:45:37.905: INFO: rc: 1
Feb 17 22:45:37.906: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001be72c0 exit status 1 true [0xc002a04128 0xc002a04140 0xc002a04158] [0xc002a04128 0xc002a04140 0xc002a04158] [0xc002a04138 0xc002a04150] [0xba70e0 0xba70e0] 0xc002963da0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:45:47.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:45:49.502: INFO: rc: 1
Feb 17 22:45:49.503: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002bc7e00 exit status 1 true [0xc00293e398 0xc00293e3b8 0xc00293e3d8] [0xc00293e398 0xc00293e3b8 0xc00293e3d8] [0xc00293e3b0 0xc00293e3d0] [0xba70e0 0xba70e0] 0xc0019df680 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:45:59.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:45:59.798: INFO: rc: 1
Feb 17 22:45:59.798: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019ec090 exit status 1 true [0xc002a04008 0xc002a04020 0xc002a04038] [0xc002a04008 0xc002a04020 0xc002a04038] [0xc002a04018 0xc002a04030] [0xba70e0 0xba70e0] 0xc0021666c0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:46:09.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:46:10.160: INFO: rc: 1
Feb 17 22:46:10.160: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002a44090 exit status 1 true [0xc002b9a008 0xc002b9a048 0xc002b9a088] [0xc002b9a008 0xc002b9a048 0xc002b9a088] [0xc002b9a030 0xc002b9a080] [0xba70e0 0xba70e0] 0xc002459aa0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:46:20.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:46:20.259: INFO: rc: 1
Feb 17 22:46:20.259: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0005860c0 exit status 1 true [0xc00293e000 0xc00293e018 0xc00293e030] [0xc00293e000 0xc00293e018 0xc00293e030] [0xc00293e010 0xc00293e028] [0xba70e0 0xba70e0] 0xc0027c4a20 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:46:30.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:46:31.256: INFO: rc: 1
Feb 17 22:46:31.256: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002a44150 exit status 1 true [0xc002b9a090 0xc002b9a0a8 0xc002b9a0f8] [0xc002b9a090 0xc002b9a0a8 0xc002b9a0f8] [0xc002b9a0a0 0xc002b9a0d8] [0xba70e0 0xba70e0] 0xc002908fc0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:46:41.256: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:46:42.935: INFO: rc: 1
Feb 17 22:46:42.935: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002a44240 exit status 1 true [0xc002b9a110 0xc002b9a168 0xc002b9a1b8] [0xc002b9a110 0xc002b9a168 0xc002b9a1b8] [0xc002b9a148 0xc002b9a198] [0xba70e0 0xba70e0] 0xc003df40c0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:46:52.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:46:55.081: INFO: rc: 1
Feb 17 22:46:55.082: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002a44330 exit status 1 true [0xc002b9a1d0 0xc002b9a220 0xc002b9a250] [0xc002b9a1d0 0xc002b9a220 0xc002b9a250] [0xc002b9a208 0xc002b9a248] [0xba70e0 0xba70e0] 0xc003df4480 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:47:05.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:47:05.169: INFO: rc: 1
Feb 17 22:47:05.169: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0005861b0 exit status 1 true [0xc00293e038 0xc00293e050 0xc00293e068] [0xc00293e038 0xc00293e050 0xc00293e068] [0xc00293e048 0xc00293e060] [0xba70e0 0xba70e0] 0xc002c98000 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:47:15.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:47:19.571: INFO: rc: 1
Feb 17 22:47:19.571: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000586270 exit status 1 true [0xc00293e070 0xc00293e088 0xc00293e0a0] [0xc00293e070 0xc00293e088 0xc00293e0a0] [0xc00293e080 0xc00293e098] [0xba70e0 0xba70e0] 0xc002c987e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:47:29.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:47:30.146: INFO: rc: 1
Feb 17 22:47:30.146: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010ac0f0 exit status 1 true [0xc0000f20a8 0xc0000f2390 0xc0000f2470] [0xc0000f20a8 0xc0000f2390 0xc0000f2470] [0xc0000f22a0 0xc0000f2430] [0xba70e0 0xba70e0] 0xc002b3f680 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:47:40.146: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:47:40.495: INFO: rc: 1
Feb 17 22:47:40.495: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019ec150 exit status 1 true [0xc002a04040 0xc002a04058 0xc002a04070] [0xc002a04040 0xc002a04058 0xc002a04070] [0xc002a04050 0xc002a04068] [0xba70e0 0xba70e0] 0xc0025d8300 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:47:50.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:47:51.876: INFO: rc: 1
Feb 17 22:47:51.877: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019ec240 exit status 1 true [0xc002a04078 0xc002a04090 0xc002a040a8] [0xc002a04078 0xc002a04090 0xc002a040a8] [0xc002a04088 0xc002a040a0] [0xba70e0 0xba70e0] 0xc0025d87e0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:48:01.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:48:03.086: INFO: rc: 1
Feb 17 22:48:03.086: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002a440c0 exit status 1 true [0xc002b9a018 0xc002b9a068 0xc002b9a090] [0xc002b9a018 0xc002b9a068 0xc002b9a090] [0xc002b9a048 0xc002b9a088] [0xba70e0 0xba70e0] 0xc003df4300 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:48:13.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 22:48:13.211: INFO: rc: 1
Feb 17 22:48:13.211: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000586090 exit status 1 true [0xc00293e000 0xc00293e018 0xc00293e030] [0xc00293e000 0xc00293e018 0xc00293e030] [0xc00293e010 0xc00293e028] [0xba70e0 0xba70e0] 0xc002908fc0 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-0" not found
error: exit status 1
Feb 17 22:48:23.211: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 22:48:24.393: INFO: rc: 1 Feb 17 22:48:24.393: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0005861e0 exit status 1 true [0xc00293e038 0xc00293e050 0xc00293e068] [0xc00293e038 0xc00293e050 0xc00293e068] [0xc00293e048 0xc00293e060] [0xba70e0 0xba70e0] 0xc0027c4780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 22:48:34.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 22:48:34.515: INFO: rc: 1 Feb 17 22:48:34.516: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002a441e0 exit status 1 true [0xc002b9a098 0xc002b9a0c0 0xc002b9a110] [0xc002b9a098 0xc002b9a0c0 0xc002b9a110] [0xc002b9a0a8 0xc002b9a0f8] [0xba70e0 0xba70e0] 0xc003df46c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 22:48:44.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 22:48:44.616: INFO: rc: 1 Feb 17 22:48:44.616: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000586300 exit status 1 true [0xc00293e070 0xc00293e088 0xc00293e0a0] [0xc00293e070 0xc00293e088 0xc00293e0a0] [0xc00293e080 0xc00293e098] [0xba70e0 0xba70e0] 0xc0027c5e00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 22:48:54.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 22:48:54.707: INFO: rc: 1 Feb 17 22:48:54.707: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019ec0c0 exit status 1 true [0xc002a04000 0xc002a04018 0xc002a04030] [0xc002a04000 0xc002a04018 0xc002a04030] [0xc002a04010 0xc002a04028] [0xba70e0 0xba70e0] 0xc002459aa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 22:49:04.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 22:49:04.795: INFO: rc: 1 Feb 17 22:49:04.795: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0005863c0 exit status 1 true [0xc00293e0a8 0xc00293e0c0 0xc00293e0d8] [0xc00293e0a8 0xc00293e0c0 0xc00293e0d8] [0xc00293e0b8 0xc00293e0d0] [0xba70e0 0xba70e0] 0xc002125620 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 22:49:14.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 22:49:15.007: INFO: rc: 1 Feb 17 22:49:15.007: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002a44360 exit status 1 true [0xc002b9a130 0xc002b9a180 0xc002b9a1d0] [0xc002b9a130 0xc002b9a180 0xc002b9a1d0] [0xc002b9a168 0xc002b9a1b8] [0xba70e0 0xba70e0] 0xc003df4a80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 22:49:25.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 22:49:25.185: INFO: rc: 1 Feb 17 22:49:25.186: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0005864b0 exit status 1 true [0xc00293e0e0 0xc00293e0f8 0xc00293e110] [0xc00293e0e0 0xc00293e0f8 0xc00293e110] [0xc00293e0f0 0xc00293e108] [0xba70e0 0xba70e0] 0xc002c98000 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 22:49:35.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 22:49:35.278: INFO: rc: 1 Feb 17 22:49:35.278: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019ec1e0 exit status 1 true [0xc002a04038 0xc002a04050 0xc002a04068] [0xc002a04038 0xc002a04050 0xc002a04068] [0xc002a04048 0xc002a04060] [0xba70e0 0xba70e0] 0xc0025d8540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 22:49:45.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 22:49:45.357: INFO: rc: 1 Feb 17 22:49:45.358: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 
0xc0005865a0 exit status 1 true [0xc00293e118 0xc00293e160 0xc00293e1a0] [0xc00293e118 0xc00293e160 0xc00293e1a0] [0xc00293e148 0xc00293e180] [0xba70e0 0xba70e0] 0xc002c987e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 22:49:55.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 22:49:56.879: INFO: rc: 1 Feb 17 22:49:56.880: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000586690 exit status 1 true [0xc00293e1b8 0xc00293e1d0 0xc00293e220] [0xc00293e1b8 0xc00293e1d0 0xc00293e220] [0xc00293e1c8 0xc00293e208] [0xba70e0 0xba70e0] 0xc002c99ec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 22:50:06.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 22:50:07.016: INFO: rc: 1 Feb 17 22:50:07.016: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0019ec060 exit status 1 true [0xc002a04008 0xc002a04020 0xc002a04038] [0xc002a04008 0xc002a04020 0xc002a04038] [0xc002a04018 0xc002a04030] [0xba70e0 0xba70e0] 0xc0021666c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 17 22:50:17.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-407 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 17 22:50:17.151: INFO: rc: 1 Feb 17 22:50:17.151: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Feb 17 22:50:17.151: INFO: Scaling statefulset ss to 0 Feb 17 22:50:17.158: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 17 22:50:17.160: INFO: Deleting all statefulset in ns statefulset-407 Feb 17 22:50:17.162: INFO: Scaling statefulset ss to 0 Feb 17 22:50:17.168: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 22:50:17.169: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 22:50:17.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-407" for this suite. 
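The five minutes of retries above show the framework's fixed-interval RunHostCmd retry pattern: run kubectl exec, and on a non-zero exit wait 10s and try again until an overall deadline passes (here the pod had already been deleted, so every attempt failed). Below is a minimal Go sketch of that pattern only, not the e2e framework's actual helper; the runHostCmd/runHostCmdWithRetries names and the 5-minute deadline are assumptions for the example.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runHostCmd is a hypothetical stand-in for the framework helper seen in the
// log: it shells out to kubectl exec and returns combined output and error.
func runHostCmd(ns, pod, cmd string) (string, error) {
	out, err := exec.Command("kubectl", "exec", "--namespace="+ns, pod,
		"--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
	return string(out), err
}

// runHostCmdWithRetries retries every `interval` until `timeout`, mirroring
// the "Waiting 10s to retry failed RunHostCmd" entries above.
func runHostCmdWithRetries(ns, pod, cmd string, interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := runHostCmd(ns, pod, cmd)
		if err == nil {
			return out, nil
		}
		if time.Now().After(deadline) {
			return out, fmt.Errorf("giving up after %v: %v", timeout, err)
		}
		fmt.Printf("Waiting %v to retry failed RunHostCmd: %v\n", interval, err)
		time.Sleep(interval)
	}
}

func main() {
	out, err := runHostCmdWithRetries("statefulset-407", "ss-0",
		"mv -v /tmp/index.html /usr/share/nginx/html/ || true",
		10*time.Second, 5*time.Minute)
	fmt.Println(out, err)
}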
Feb 17 22:50:33.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 22:50:33.831: INFO: namespace statefulset-407 deletion completed in 16.633367385s • [SLOW TEST:479.613 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 22:50:33.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 17 22:50:33.955: INFO: Creating deployment "test-recreate-deployment" Feb 17 22:50:34.558: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Feb 17 22:50:34.710: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Feb 17 22:50:36.714: INFO: Waiting deployment "test-recreate-deployment" to complete Feb 17 22:50:36.716: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749199034, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749199034, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749199036, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749199034, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 17 22:50:38.809 through Feb 17 22:50:58.719: INFO: deployment status polled repeatedly; every poll returned a v1.DeploymentStatus identical to the entry above (Available=False "MinimumReplicasUnavailable", Progressing=True "ReplicaSetUpdated")
Feb 17 22:51:00.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1,
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749199034, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749199034, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749199036, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749199034, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 17 22:51:03.065: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Feb 17 22:51:03.114: INFO: Updating deployment test-recreate-deployment Feb 17 22:51:03.114: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 17 22:51:11.372: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-202,SelfLink:/apis/apps/v1/namespaces/deployment-202/deployments/test-recreate-deployment,UID:1ba43ff0-110a-48e2-ab98-bdab64bbed27,ResourceVersion:6938860,Generation:2,CreationTimestamp:2021-02-17 22:50:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2021-02-17 22:51:08 +0000 UTC 2021-02-17 22:51:08 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2021-02-17 22:51:10 +0000 UTC 2021-02-17 22:50:34 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Feb 17 22:51:11.424: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-202,SelfLink:/apis/apps/v1/namespaces/deployment-202/replicasets/test-recreate-deployment-5c8c9cc69d,UID:70344b57-5ea1-4095-93c1-c1ec4600b92f,ResourceVersion:6938858,Generation:1,CreationTimestamp:2021-02-17 22:51:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 1ba43ff0-110a-48e2-ab98-bdab64bbed27 0xc000ab6327 0xc000ab6328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 17 22:51:11.424: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Feb 17 22:51:11.424: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-202,SelfLink:/apis/apps/v1/namespaces/deployment-202/replicasets/test-recreate-deployment-6df85df6b9,UID:b8b5579e-9616-4066-a7a6-29dea6e418f4,ResourceVersion:6938836,Generation:2,CreationTimestamp:2021-02-17 22:50:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 1ba43ff0-110a-48e2-ab98-bdab64bbed27 0xc000ab63f7 0xc000ab63f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 17 22:51:11.427: INFO: Pod "test-recreate-deployment-5c8c9cc69d-kgp2w" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-kgp2w,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-202,SelfLink:/api/v1/namespaces/deployment-202/pods/test-recreate-deployment-5c8c9cc69d-kgp2w,UID:b4ab7ba3-6c2e-4fd2-b680-3e31710db8c4,ResourceVersion:6938857,Generation:0,CreationTimestamp:2021-02-17 22:51:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 70344b57-5ea1-4095-93c1-c1ec4600b92f 0xc001288927 0xc001288928}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpf8f {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpf8f,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-lpf8f true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001288a00} {node.kubernetes.io/unreachable Exists NoExecute 0xc001288a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:51:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:51:08 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:51:08 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-17 22:51:08 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-17 22:51:08 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 22:51:11.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-202" for this suite. Feb 17 22:51:23.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 22:51:23.652: INFO: namespace deployment-202 deletion completed in 12.221821967s • [SLOW TEST:49.822 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 22:51:23.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-01720b35-6c00-436e-8e9a-92eb7b96e3ed STEP: Creating a pod to test consume configMaps Feb 17 22:51:23.874: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d41bfcd6-ab44-41c7-af32-5ee590431d75" in namespace "projected-1110" to be "success or failure" Feb 17 22:51:23.944: INFO: Pod "pod-projected-configmaps-d41bfcd6-ab44-41c7-af32-5ee590431d75": Phase="Pending", Reason="", readiness=false. Elapsed: 70.43645ms Feb 17 22:51:26.042: INFO: Pod "pod-projected-configmaps-d41bfcd6-ab44-41c7-af32-5ee590431d75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.168067237s Feb 17 22:51:28.108: INFO: Pod "pod-projected-configmaps-d41bfcd6-ab44-41c7-af32-5ee590431d75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23444559s Feb 17 22:51:30.111: INFO: Pod "pod-projected-configmaps-d41bfcd6-ab44-41c7-af32-5ee590431d75": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.237784251s Feb 17 22:51:32.114: INFO: Pod "pod-projected-configmaps-d41bfcd6-ab44-41c7-af32-5ee590431d75": Phase="Pending", Reason="", readiness=false. Elapsed: 8.240327869s Feb 17 22:51:34.348: INFO: Pod "pod-projected-configmaps-d41bfcd6-ab44-41c7-af32-5ee590431d75": Phase="Running", Reason="", readiness=true. Elapsed: 10.47473568s Feb 17 22:51:36.352: INFO: Pod "pod-projected-configmaps-d41bfcd6-ab44-41c7-af32-5ee590431d75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.478307052s STEP: Saw pod success Feb 17 22:51:36.352: INFO: Pod "pod-projected-configmaps-d41bfcd6-ab44-41c7-af32-5ee590431d75" satisfied condition "success or failure" Feb 17 22:51:36.354: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-d41bfcd6-ab44-41c7-af32-5ee590431d75 container projected-configmap-volume-test: STEP: delete the pod Feb 17 22:51:36.984: INFO: Waiting for pod pod-projected-configmaps-d41bfcd6-ab44-41c7-af32-5ee590431d75 to disappear Feb 17 22:51:37.251: INFO: Pod pod-projected-configmaps-d41bfcd6-ab44-41c7-af32-5ee590431d75 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 22:51:37.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1110" for this suite. Feb 17 22:51:47.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 22:51:47.385: INFO: namespace projected-1110 deletion completed in 10.129963285s • [SLOW TEST:23.732 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 22:51:47.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-9679c26e-4766-4d2e-889c-72e2530ef4d2 STEP: Creating a pod to test consume secrets Feb 17 22:51:49.084: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669" in namespace "projected-5567" to be "success or failure" Feb 17 22:51:49.105: INFO: Pod "pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669": Phase="Pending", Reason="", 
readiness=false. Elapsed: 21.549641ms Feb 17 22:51:51.300: INFO: Pod "pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669": Phase="Pending", Reason="", readiness=false. Elapsed: 2.216149351s Feb 17 22:51:53.635: INFO: Pod "pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669": Phase="Pending", Reason="", readiness=false. Elapsed: 4.551007526s Feb 17 22:51:55.639: INFO: Pod "pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669": Phase="Pending", Reason="", readiness=false. Elapsed: 6.554967119s Feb 17 22:51:57.643: INFO: Pod "pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669": Phase="Pending", Reason="", readiness=false. Elapsed: 8.559102324s Feb 17 22:51:59.908: INFO: Pod "pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669": Phase="Pending", Reason="", readiness=false. Elapsed: 10.824956448s Feb 17 22:52:02.128: INFO: Pod "pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.044322705s STEP: Saw pod success Feb 17 22:52:02.128: INFO: Pod "pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669" satisfied condition "success or failure" Feb 17 22:52:02.131: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669 container secret-volume-test: STEP: delete the pod Feb 17 22:52:03.406: INFO: Waiting for pod pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669 to disappear Feb 17 22:52:03.417: INFO: Pod pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 22:52:03.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5567" for this suite. 
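The volume tests above all follow the same shape: create a pod whose container reads the mounted configMap or secret and exits, then poll the pod phase until it reaches Succeeded or Failed, logging elapsed time at each poll. Below is a rough Go sketch of that wait loop, with a stubbed getPodPhase standing in for a real API-server call; the function names and the 2s poll interval are assumptions, not the framework's actual code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// getPodPhase is a stub standing in for a real client-go Pods(ns).Get call;
// an actual implementation would return pod.Status.Phase from the API server.
func getPodPhase(ns, name string) string { return "Succeeded" }

// waitForPodSuccessOrFailure polls every interval until the pod finishes or
// the timeout expires, logging elapsed time like the entries above.
func waitForPodSuccessOrFailure(ns, name string, interval, timeout time.Duration) error {
	start := time.Now()
	for {
		phase := getPodPhase(ns, name)
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, phase, time.Since(start))
		switch phase {
		case "Succeeded":
			return nil
		case "Failed":
			return errors.New("pod failed")
		}
		if time.Since(start) > timeout {
			return fmt.Errorf("timed out after %v waiting for pod %q", timeout, name)
		}
		time.Sleep(interval)
	}
}

func main() {
	_ = waitForPodSuccessOrFailure("projected-5567",
		"pod-projected-secrets-ef5a829d-e311-4531-aa99-bae471c37669",
		2*time.Second, 5*time.Minute)
}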
Feb 17 22:52:13.641: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 22:52:13.738: INFO: namespace projected-5567 deletion completed in 10.31803952s • [SLOW TEST:26.353 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 22:52:13.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3761 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3761 STEP: Creating statefulset with conflicting port in namespace statefulset-3761 STEP: Waiting until pod test-pod will start running in namespace statefulset-3761 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3761 Feb 17 22:52:26.215: INFO: Observed stateful pod in namespace: statefulset-3761, name: ss-0, uid: 0688cc51-0241-4d61-9839-5d099cdb1834, status phase: Pending. Waiting for statefulset controller to delete. Feb 17 22:52:29.030: INFO: Observed stateful pod in namespace: statefulset-3761, name: ss-0, uid: 0688cc51-0241-4d61-9839-5d099cdb1834, status phase: Failed. Waiting for statefulset controller to delete. Feb 17 22:52:29.069: INFO: Observed stateful pod in namespace: statefulset-3761, name: ss-0, uid: 0688cc51-0241-4d61-9839-5d099cdb1834, status phase: Failed. Waiting for statefulset controller to delete. 
Feb 17 22:52:29.121: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3761 STEP: Removing pod with conflicting port in namespace statefulset-3761 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3761 and reaches running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 17 22:52:41.281: INFO: Deleting all statefulset in ns statefulset-3761 Feb 17 22:52:41.284: INFO: Scaling statefulset ss to 0 Feb 17 22:52:52.006: INFO: Waiting for statefulset status.replicas updated to 0 Feb 17 22:52:52.008: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 22:52:52.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3761" for this suite. Feb 17 22:53:02.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 22:53:02.298: INFO: namespace statefulset-3761 deletion completed in 10.180086017s • [SLOW TEST:48.560 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 22:53:02.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 17 22:53:03.332: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 22:53:33.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace
"init-container-5392" for this suite. Feb 17 22:54:04.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 17 22:54:06.047: INFO: namespace init-container-5392 deletion completed in 32.361606129s • [SLOW TEST:63.748 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 17 22:54:06.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1570 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 17 22:54:06.451: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 17 22:54:42.576: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.26:8080/dial?request=hostName&protocol=udp&host=10.244.1.114&port=8081&tries=1'] Namespace:pod-network-test-1570 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 22:54:42.576: INFO: >>> kubeConfig: /root/.kube/config Feb 17 22:54:42.864: INFO: Waiting for endpoints: map[] Feb 17 22:54:42.876: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.26:8080/dial?request=hostName&protocol=udp&host=10.244.2.25&port=8081&tries=1'] Namespace:pod-network-test-1570 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 17 22:54:42.876: INFO: >>> kubeConfig: /root/.kube/config Feb 17 22:54:43.024: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 17 22:54:43.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1570" for this suite. 
Feb 17 22:55:10.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 22:55:10.572: INFO: namespace pod-network-test-1570 deletion completed in 27.54454704s

• [SLOW TEST:64.525 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:55:10.572: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-5dd89f7d-2b1f-4bf8-8ecb-fc1010e099be
STEP: Creating a pod to test consume configMaps
Feb 17 22:55:12.133: INFO: Waiting up to 5m0s for pod "pod-configmaps-10007f1e-c100-4023-9a53-97c9450f8e49" in namespace "configmap-8486" to be "success or failure"
Feb 17 22:55:12.613: INFO: Pod "pod-configmaps-10007f1e-c100-4023-9a53-97c9450f8e49": Phase="Pending", Reason="", readiness=false. Elapsed: 479.595437ms
Feb 17 22:55:14.655: INFO: Pod "pod-configmaps-10007f1e-c100-4023-9a53-97c9450f8e49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.521409843s
Feb 17 22:55:16.658: INFO: Pod "pod-configmaps-10007f1e-c100-4023-9a53-97c9450f8e49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.524529164s
Feb 17 22:55:18.750: INFO: Pod "pod-configmaps-10007f1e-c100-4023-9a53-97c9450f8e49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.616663022s
Feb 17 22:55:20.870: INFO: Pod "pod-configmaps-10007f1e-c100-4023-9a53-97c9450f8e49": Phase="Running", Reason="", readiness=true. Elapsed: 8.736917861s
Feb 17 22:55:22.874: INFO: Pod "pod-configmaps-10007f1e-c100-4023-9a53-97c9450f8e49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.740619324s
STEP: Saw pod success
Feb 17 22:55:22.874: INFO: Pod "pod-configmaps-10007f1e-c100-4023-9a53-97c9450f8e49" satisfied condition "success or failure"
Feb 17 22:55:22.876: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-10007f1e-c100-4023-9a53-97c9450f8e49 container configmap-volume-test: 
STEP: delete the pod
Feb 17 22:55:22.914: INFO: Waiting for pod pod-configmaps-10007f1e-c100-4023-9a53-97c9450f8e49 to disappear
Feb 17 22:55:22.972: INFO: Pod pod-configmaps-10007f1e-c100-4023-9a53-97c9450f8e49 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 22:55:22.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8486" for this suite.
Feb 17 22:55:28.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 22:55:29.115: INFO: namespace configmap-8486 deletion completed in 6.14020351s

• [SLOW TEST:18.543 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:55:29.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 22:55:48.461: INFO: Waiting up to 5m0s for pod "client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53" in namespace "pods-5153" to be "success or failure"
Feb 17 22:55:48.589: INFO: Pod "client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53": Phase="Pending", Reason="", readiness=false. Elapsed: 127.911203ms
Feb 17 22:55:50.655: INFO: Pod "client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193611572s
Feb 17 22:55:52.659: INFO: Pod "client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197735582s
Feb 17 22:55:54.682: INFO: Pod "client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22111532s
Feb 17 22:55:57.620: INFO: Pod "client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53": Phase="Pending", Reason="", readiness=false. Elapsed: 9.158269855s
Feb 17 22:55:59.623: INFO: Pod "client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53": Phase="Pending", Reason="", readiness=false. Elapsed: 11.161663208s
Feb 17 22:56:02.099: INFO: Pod "client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53": Phase="Running", Reason="", readiness=true. Elapsed: 13.637466816s
Feb 17 22:56:04.249: INFO: Pod "client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.787300446s
STEP: Saw pod success
Feb 17 22:56:04.249: INFO: Pod "client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53" satisfied condition "success or failure"
Feb 17 22:56:05.428: INFO: Trying to get logs from node iruya-worker pod client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53 container env3cont: 
STEP: delete the pod
Feb 17 22:56:05.813: INFO: Waiting for pod client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53 to disappear
Feb 17 22:56:05.854: INFO: Pod client-envvars-7bbf71c0-d852-43d3-94c8-90038fb12a53 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 22:56:05.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5153" for this suite.
Feb 17 22:56:54.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 22:56:54.356: INFO: namespace pods-5153 deletion completed in 48.499442389s

• [SLOW TEST:85.240 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:56:54.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 17 22:56:54.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7777'
Feb 17 22:57:02.587: INFO: stderr: ""
Feb 17 22:57:02.587: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 17 22:57:03.591: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:03.591: INFO: Found 0 / 1
Feb 17 22:57:04.592: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:04.592: INFO: Found 0 / 1
Feb 17 22:57:06.674: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:06.674: INFO: Found 0 / 1
Feb 17 22:57:07.590: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:07.590: INFO: Found 0 / 1
Feb 17 22:57:08.590: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:08.591: INFO: Found 0 / 1
Feb 17 22:57:10.108: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:10.108: INFO: Found 0 / 1
Feb 17 22:57:10.591: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:10.591: INFO: Found 0 / 1
Feb 17 22:57:12.682: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:12.682: INFO: Found 0 / 1
Feb 17 22:57:13.591: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:13.592: INFO: Found 0 / 1
Feb 17 22:57:14.602: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:14.602: INFO: Found 0 / 1
Feb 17 22:57:16.522: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:16.522: INFO: Found 0 / 1
Feb 17 22:57:16.836: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:16.836: INFO: Found 0 / 1
Feb 17 22:57:17.680: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:17.680: INFO: Found 0 / 1
Feb 17 22:57:18.595: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:18.595: INFO: Found 0 / 1
Feb 17 22:57:20.093: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:20.093: INFO: Found 0 / 1
Feb 17 22:57:20.668: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:20.668: INFO: Found 0 / 1
Feb 17 22:57:21.592: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:21.592: INFO: Found 0 / 1
Feb 17 22:57:23.167: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:23.167: INFO: Found 0 / 1
Feb 17 22:57:23.590: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:23.590: INFO: Found 0 / 1
Feb 17 22:57:24.614: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:24.614: INFO: Found 0 / 1
Feb 17 22:57:25.728: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:25.728: INFO: Found 0 / 1
Feb 17 22:57:26.591: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:26.591: INFO: Found 0 / 1
Feb 17 22:57:27.675: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:27.675: INFO: Found 0 / 1
Feb 17 22:57:29.558: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:29.558: INFO: Found 1 / 1
Feb 17 22:57:29.558: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Feb 17 22:57:29.950: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:29.950: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Feb 17 22:57:29.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-mlwp4 --namespace=kubectl-7777 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 17 22:57:30.300: INFO: stderr: ""
Feb 17 22:57:30.300: INFO: stdout: "pod/redis-master-mlwp4 patched\n"
STEP: checking annotations
Feb 17 22:57:30.423: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:57:30.423: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 22:57:30.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7777" for this suite.
Feb 17 22:57:54.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 22:57:54.613: INFO: namespace kubectl-7777 deletion completed in 24.179437211s

• [SLOW TEST:60.257 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:57:54.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0217 22:57:56.372577 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 17 22:57:56.372: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 22:57:56.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8856" for this suite.
Feb 17 22:58:02.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 22:58:02.467: INFO: namespace gc-8856 deletion completed in 6.0924562s

• [SLOW TEST:7.854 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:58:02.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 22:58:02.607: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
alternatives.log
containers/
...
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 22:58:09.042: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e49aea4-ca1c-43a6-8c62-255094f6df59" in namespace "downward-api-34" to be "success or failure"
Feb 17 22:58:09.081: INFO: Pod "downwardapi-volume-4e49aea4-ca1c-43a6-8c62-255094f6df59": Phase="Pending", Reason="", readiness=false. Elapsed: 38.222932ms
Feb 17 22:58:11.590: INFO: Pod "downwardapi-volume-4e49aea4-ca1c-43a6-8c62-255094f6df59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.547364346s
Feb 17 22:58:13.593: INFO: Pod "downwardapi-volume-4e49aea4-ca1c-43a6-8c62-255094f6df59": Phase="Pending", Reason="", readiness=false. Elapsed: 4.551081702s
Feb 17 22:58:15.655: INFO: Pod "downwardapi-volume-4e49aea4-ca1c-43a6-8c62-255094f6df59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.612784274s
Feb 17 22:58:17.794: INFO: Pod "downwardapi-volume-4e49aea4-ca1c-43a6-8c62-255094f6df59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.751259503s
STEP: Saw pod success
Feb 17 22:58:17.794: INFO: Pod "downwardapi-volume-4e49aea4-ca1c-43a6-8c62-255094f6df59" satisfied condition "success or failure"
Feb 17 22:58:17.796: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4e49aea4-ca1c-43a6-8c62-255094f6df59 container client-container: 
STEP: delete the pod
Feb 17 22:58:17.872: INFO: Waiting for pod downwardapi-volume-4e49aea4-ca1c-43a6-8c62-255094f6df59 to disappear
Feb 17 22:58:17.925: INFO: Pod downwardapi-volume-4e49aea4-ca1c-43a6-8c62-255094f6df59 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 22:58:17.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-34" for this suite.
Feb 17 22:58:23.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 22:58:24.008: INFO: namespace downward-api-34 deletion completed in 6.079724396s

• [SLOW TEST:15.253 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
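For readers reproducing the downward API check above by hand: the test mounts a downwardAPI volume that projects the container's own memory request into a file, then reads it back from the pod logs. A minimal sketch under stated assumptions (the pod name, image, and 32Mi request are placeholders, not values from this run; with no divisor set, the file holds the request in bytes):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # placeholder image
    # Print the projected file once, then exit so the pod reaches Succeeded.
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"                # projected as 33554432 (bytes; divisor defaults to 1)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "memory_request"
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF

Fetching the logs of the completed pod, as the "Trying to get logs" step does above, should then show the byte value of the request.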
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:58:24.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-6a106bcd-9e04-46df-97da-56c2495849d8
STEP: Creating a pod to test consume secrets
Feb 17 22:58:24.239: INFO: Waiting up to 5m0s for pod "pod-secrets-0bebe179-4522-4e9d-ae30-29f01fd1e2fb" in namespace "secrets-4432" to be "success or failure"
Feb 17 22:58:24.255: INFO: Pod "pod-secrets-0bebe179-4522-4e9d-ae30-29f01fd1e2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.624368ms
Feb 17 22:58:26.258: INFO: Pod "pod-secrets-0bebe179-4522-4e9d-ae30-29f01fd1e2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01854689s
Feb 17 22:58:28.261: INFO: Pod "pod-secrets-0bebe179-4522-4e9d-ae30-29f01fd1e2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021934088s
Feb 17 22:58:30.585: INFO: Pod "pod-secrets-0bebe179-4522-4e9d-ae30-29f01fd1e2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.345944095s
Feb 17 22:58:32.770: INFO: Pod "pod-secrets-0bebe179-4522-4e9d-ae30-29f01fd1e2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.531025815s
Feb 17 22:58:34.944: INFO: Pod "pod-secrets-0bebe179-4522-4e9d-ae30-29f01fd1e2fb": Phase="Running", Reason="", readiness=true. Elapsed: 10.705075596s
Feb 17 22:58:36.947: INFO: Pod "pod-secrets-0bebe179-4522-4e9d-ae30-29f01fd1e2fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.707709177s
STEP: Saw pod success
Feb 17 22:58:36.947: INFO: Pod "pod-secrets-0bebe179-4522-4e9d-ae30-29f01fd1e2fb" satisfied condition "success or failure"
Feb 17 22:58:36.949: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-0bebe179-4522-4e9d-ae30-29f01fd1e2fb container secret-volume-test: 
STEP: delete the pod
Feb 17 22:58:38.080: INFO: Waiting for pod pod-secrets-0bebe179-4522-4e9d-ae30-29f01fd1e2fb to disappear
Feb 17 22:58:38.105: INFO: Pod pod-secrets-0bebe179-4522-4e9d-ae30-29f01fd1e2fb no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 22:58:38.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4432" for this suite.
Feb 17 22:58:48.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 22:58:48.841: INFO: namespace secrets-4432 deletion completed in 10.732136712s

• [SLOW TEST:24.833 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
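The "with mappings" variant above differs from the plain secret-volume test in that individual keys are remapped to new file paths via items. A minimal sketch (the secret name, key, and paths are placeholders, not taken from this run):

kubectl create secret generic secret-map-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                    # placeholder image
    # The key is readable only at its remapped path, not at /etc/secret-volume/data-1.
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-map-example
      items:
      - key: data-1
        path: new-path-data-1         # the "mapping": key data-1 surfaces under this relative path
EOF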
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:58:48.841: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb 17 22:58:49.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3405'
Feb 17 22:58:50.546: INFO: stderr: ""
Feb 17 22:58:50.546: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb 17 22:58:51.550: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:58:51.550: INFO: Found 0 / 1
Feb 17 22:58:52.550: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:58:52.550: INFO: Found 0 / 1
Feb 17 22:58:54.637: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:58:54.637: INFO: Found 0 / 1
Feb 17 22:58:55.549: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:58:55.550: INFO: Found 0 / 1
Feb 17 22:58:56.549: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:58:56.549: INFO: Found 0 / 1
Feb 17 22:59:00.408: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:59:00.408: INFO: Found 0 / 1
Feb 17 22:59:00.591: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:59:00.591: INFO: Found 0 / 1
Feb 17 22:59:01.615: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:59:01.615: INFO: Found 0 / 1
Feb 17 22:59:02.672: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:59:02.672: INFO: Found 0 / 1
Feb 17 22:59:03.729: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:59:03.729: INFO: Found 0 / 1
Feb 17 22:59:04.574: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:59:04.574: INFO: Found 0 / 1
Feb 17 22:59:05.549: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:59:05.549: INFO: Found 1 / 1
Feb 17 22:59:05.549: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 17 22:59:05.550: INFO: Selector matched 1 pods for map[app:redis]
Feb 17 22:59:05.550: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb 17 22:59:05.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kcvkn redis-master --namespace=kubectl-3405'
Feb 17 22:59:05.659: INFO: stderr: ""
Feb 17 22:59:05.659: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 17 Feb 22:59:04.209 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Feb 22:59:04.209 # Server started, Redis version 3.2.12\n1:M 17 Feb 22:59:04.209 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Feb 22:59:04.209 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb 17 22:59:05.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kcvkn redis-master --namespace=kubectl-3405 --tail=1'
Feb 17 22:59:05.762: INFO: stderr: ""
Feb 17 22:59:05.762: INFO: stdout: "1:M 17 Feb 22:59:04.209 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb 17 22:59:05.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kcvkn redis-master --namespace=kubectl-3405 --limit-bytes=1'
Feb 17 22:59:05.860: INFO: stderr: ""
Feb 17 22:59:05.860: INFO: stdout: " "
STEP: exposing timestamps
Feb 17 22:59:05.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kcvkn redis-master --namespace=kubectl-3405 --tail=1 --timestamps'
Feb 17 22:59:05.955: INFO: stderr: ""
Feb 17 22:59:05.955: INFO: stdout: "2021-02-17T22:59:04.209907562Z 1:M 17 Feb 22:59:04.209 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb 17 22:59:08.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kcvkn redis-master --namespace=kubectl-3405 --since=1s'
Feb 17 22:59:08.550: INFO: stderr: ""
Feb 17 22:59:08.550: INFO: stdout: ""
Feb 17 22:59:08.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-kcvkn redis-master --namespace=kubectl-3405 --since=24h'
Feb 17 22:59:08.648: INFO: stderr: ""
Feb 17 22:59:08.649: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 17 Feb 22:59:04.209 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Feb 22:59:04.209 # Server started, Redis version 3.2.12\n1:M 17 Feb 22:59:04.209 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Feb 22:59:04.209 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb 17 22:59:08.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3405'
Feb 17 22:59:08.791: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 17 22:59:08.791: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb 17 22:59:08.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-3405'
Feb 17 22:59:08.915: INFO: stderr: "No resources found.\n"
Feb 17 22:59:08.915: INFO: stdout: ""
Feb 17 22:59:08.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-3405 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 17 22:59:08.989: INFO: stderr: ""
Feb 17 22:59:08.989: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 22:59:08.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3405" for this suite.
Feb 17 22:59:21.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 22:59:21.122: INFO: namespace kubectl-3405 deletion completed in 12.129927142s

• [SLOW TEST:32.281 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
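The sequence above exercises kubectl's log-filtering flags one at a time. Against this run's own pod and namespace they reduce to the following standard kubectl logs invocations:

kubectl logs redis-master-kcvkn redis-master --namespace=kubectl-3405                        # full log, startup banner included
kubectl logs redis-master-kcvkn redis-master --namespace=kubectl-3405 --tail=1               # last line only
kubectl logs redis-master-kcvkn redis-master --namespace=kubectl-3405 --limit-bytes=1        # first byte only
kubectl logs redis-master-kcvkn redis-master --namespace=kubectl-3405 --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs redis-master-kcvkn redis-master --namespace=kubectl-3405 --since=1s             # empty here: the pod last logged more than 1s ago
kubectl logs redis-master-kcvkn redis-master --namespace=kubectl-3405 --since=24h            # everything from the last 24 hours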
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 22:59:21.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 17 22:59:27.806: INFO: Pod name wrapped-volume-race-90264063-fb21-4471-8680-66d7fdbc25dd: Found 0 pods out of 5
Feb 17 22:59:34.048: INFO: Pod name wrapped-volume-race-90264063-fb21-4471-8680-66d7fdbc25dd: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-90264063-fb21-4471-8680-66d7fdbc25dd in namespace emptydir-wrapper-4657, will wait for the garbage collector to delete the pods
Feb 17 23:01:31.204: INFO: Deleting ReplicationController wrapped-volume-race-90264063-fb21-4471-8680-66d7fdbc25dd took: 1.680853565s
Feb 17 23:01:34.104: INFO: Terminating ReplicationController wrapped-volume-race-90264063-fb21-4471-8680-66d7fdbc25dd pods took: 2.900277274s
STEP: Creating RC which spawns configmap-volume pods
Feb 17 23:04:07.562: INFO: Pod name wrapped-volume-race-c6ad487d-9e80-48ee-a1ef-d52cc4a538ca: Found 0 pods out of 5
Feb 17 23:04:16.172: INFO: Pod name wrapped-volume-race-c6ad487d-9e80-48ee-a1ef-d52cc4a538ca: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-c6ad487d-9e80-48ee-a1ef-d52cc4a538ca in namespace emptydir-wrapper-4657, will wait for the garbage collector to delete the pods
Feb 17 23:06:33.391: INFO: Deleting ReplicationController wrapped-volume-race-c6ad487d-9e80-48ee-a1ef-d52cc4a538ca took: 10.718287ms
Feb 17 23:06:35.891: INFO: Terminating ReplicationController wrapped-volume-race-c6ad487d-9e80-48ee-a1ef-d52cc4a538ca pods took: 2.500183858s
STEP: Creating RC which spawns configmap-volume pods
Feb 17 23:09:32.981: INFO: Pod name wrapped-volume-race-bf0dddb5-db75-47f9-b461-ffa782d2ae31: Found 0 pods out of 5
Feb 17 23:09:40.117: INFO: Pod name wrapped-volume-race-bf0dddb5-db75-47f9-b461-ffa782d2ae31: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bf0dddb5-db75-47f9-b461-ffa782d2ae31 in namespace emptydir-wrapper-4657, will wait for the garbage collector to delete the pods
Feb 17 23:10:28.180: INFO: Deleting ReplicationController wrapped-volume-race-bf0dddb5-db75-47f9-b461-ffa782d2ae31 took: 5.004058ms
Feb 17 23:10:29.780: INFO: Terminating ReplicationController wrapped-volume-race-bf0dddb5-db75-47f9-b461-ffa782d2ae31 pods took: 1.600311176s
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:12:08.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4657" for this suite.
Feb 17 23:13:58.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:13:58.904: INFO: namespace emptydir-wrapper-4657 deletion completed in 1m50.7379378s

• [SLOW TEST:877.782 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
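What the race test above builds: 50 configmaps, then a ReplicationController whose five pods each mount every configmap as its own volume; the RC is created and torn down three times to shake out kubelet mount races. A minimal two-volume sketch of the pod shape (names, image, and volume count are placeholders; the real test wires in all 50 configmaps):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapped-volume-race-example   # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox                    # placeholder image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: racey-configmap-0
      mountPath: /etc/config-0
    - name: racey-configmap-1
      mountPath: /etc/config-1
  volumes:
  - name: racey-configmap-0
    configMap:
      name: racey-configmap-0         # must already exist, as in the "Creating 50 configmaps" step
  - name: racey-configmap-1
    configMap:
      name: racey-configmap-1
EOF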
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:13:58.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 17 23:13:59.531: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 17 23:13:59.556: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:00.362: INFO: Number of nodes with available pods: 0
Feb 17 23:14:00.362: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:02.288: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:07.352: INFO: Number of nodes with available pods: 0
Feb 17 23:14:07.352: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:11.969: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:14.640: INFO: Number of nodes with available pods: 0
Feb 17 23:14:14.640: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:16.567: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:16.569: INFO: Number of nodes with available pods: 0
Feb 17 23:14:16.569: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:17.491: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:17.494: INFO: Number of nodes with available pods: 0
Feb 17 23:14:17.494: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:18.375: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:18.378: INFO: Number of nodes with available pods: 0
Feb 17 23:14:18.378: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:19.373: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:19.414: INFO: Number of nodes with available pods: 0
Feb 17 23:14:19.415: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:20.366: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:20.370: INFO: Number of nodes with available pods: 0
Feb 17 23:14:20.370: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:21.397: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:22.594: INFO: Number of nodes with available pods: 0
Feb 17 23:14:22.594: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:23.571: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:24.398: INFO: Number of nodes with available pods: 0
Feb 17 23:14:24.398: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:26.622: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:26.916: INFO: Number of nodes with available pods: 0
Feb 17 23:14:26.916: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:27.368: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:27.371: INFO: Number of nodes with available pods: 0
Feb 17 23:14:27.371: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:28.578: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:28.750: INFO: Number of nodes with available pods: 0
Feb 17 23:14:28.750: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:29.844: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:29.942: INFO: Number of nodes with available pods: 0
Feb 17 23:14:29.942: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:30.367: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:30.370: INFO: Number of nodes with available pods: 0
Feb 17 23:14:30.370: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:31.366: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:31.368: INFO: Number of nodes with available pods: 0
Feb 17 23:14:31.368: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:32.367: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:32.370: INFO: Number of nodes with available pods: 0
Feb 17 23:14:32.370: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:33.641: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:33.644: INFO: Number of nodes with available pods: 0
Feb 17 23:14:33.644: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:34.367: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:34.371: INFO: Number of nodes with available pods: 0
Feb 17 23:14:34.371: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:35.366: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:35.369: INFO: Number of nodes with available pods: 0
Feb 17 23:14:35.369: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:36.887: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:37.716: INFO: Number of nodes with available pods: 0
Feb 17 23:14:37.716: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:38.383: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:38.386: INFO: Number of nodes with available pods: 0
Feb 17 23:14:38.386: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:39.365: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:39.367: INFO: Number of nodes with available pods: 0
Feb 17 23:14:39.367: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:40.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:40.431: INFO: Number of nodes with available pods: 0
Feb 17 23:14:40.431: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:41.367: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:41.370: INFO: Number of nodes with available pods: 0
Feb 17 23:14:41.370: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:42.366: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:42.369: INFO: Number of nodes with available pods: 0
Feb 17 23:14:42.369: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:43.872: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:43.874: INFO: Number of nodes with available pods: 0
Feb 17 23:14:43.874: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:45.378: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:45.991: INFO: Number of nodes with available pods: 0
Feb 17 23:14:45.991: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:46.773: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:47.478: INFO: Number of nodes with available pods: 0
Feb 17 23:14:47.478: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:48.581: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:48.983: INFO: Number of nodes with available pods: 0
Feb 17 23:14:48.983: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:49.496: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:49.916: INFO: Number of nodes with available pods: 0
Feb 17 23:14:49.916: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:51.463: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:52.982: INFO: Number of nodes with available pods: 0
Feb 17 23:14:52.982: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:54.490: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:54.939: INFO: Number of nodes with available pods: 0
Feb 17 23:14:54.939: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:56.304: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:56.611: INFO: Number of nodes with available pods: 0
Feb 17 23:14:56.611: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:57.769: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:57.773: INFO: Number of nodes with available pods: 0
Feb 17 23:14:57.773: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:14:58.889: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:14:58.906: INFO: Number of nodes with available pods: 0
Feb 17 23:14:58.906: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:01.001: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:01.004: INFO: Number of nodes with available pods: 0
Feb 17 23:15:01.004: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:01.366: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:01.369: INFO: Number of nodes with available pods: 0
Feb 17 23:15:01.369: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:02.366: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:02.369: INFO: Number of nodes with available pods: 0
Feb 17 23:15:02.369: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:04.408: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:04.471: INFO: Number of nodes with available pods: 0
Feb 17 23:15:04.471: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:06.475: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:07.086: INFO: Number of nodes with available pods: 0
Feb 17 23:15:07.086: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:08.240: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:08.683: INFO: Number of nodes with available pods: 0
Feb 17 23:15:08.683: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:09.367: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:09.370: INFO: Number of nodes with available pods: 0
Feb 17 23:15:09.370: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:10.367: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:10.370: INFO: Number of nodes with available pods: 0
Feb 17 23:15:10.370: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:12.565: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:12.834: INFO: Number of nodes with available pods: 0
Feb 17 23:15:12.834: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:13.367: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:13.370: INFO: Number of nodes with available pods: 0
Feb 17 23:15:13.370: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:14.367: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:14.370: INFO: Number of nodes with available pods: 0
Feb 17 23:15:14.370: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:15.479: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:15.482: INFO: Number of nodes with available pods: 0
Feb 17 23:15:15.482: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:16.367: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:16.370: INFO: Number of nodes with available pods: 0
Feb 17 23:15:16.370: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:15:18.363: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:19.092: INFO: Number of nodes with available pods: 2
Feb 17 23:15:19.092: INFO: Number of running nodes: 2, number of available pods: 2
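For reference, the condition being polled above can be expressed against the DaemonSet's status roughly as follows. This is an illustrative sketch only, not the e2e framework's own helper (which counts pods node by node): the function name, poll interval, and timeout are assumptions, and it uses current client-go signatures (the cluster's v1.15-era client-go took no context argument).

package e2esketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDaemonSetAvailable polls until the DaemonSet reports one available
// pod per schedulable node. DesiredNumberScheduled already excludes the
// tainted control-plane node, matching the "skip checking this node" lines.
func waitForDaemonSetAvailable(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return ds.Status.DesiredNumberScheduled > 0 &&
			ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
}

Once NumberAvailable reaches DesiredNumberScheduled (2 of 2 here), the test proceeds to the update step.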
STEP: Update daemon pods image.
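This step amounts to changing the image in the DaemonSet's pod template; under the default RollingUpdate strategy (maxUnavailable: 1), daemon pods are then replaced one node at a time, which is the pattern visible in the log below (daemon-set-v8xsf is recreated first, then daemon-set-znxn2). A minimal sketch of such an update via a strategic merge patch, assuming the container is named "app" (an assumption, not confirmed by the log):

package e2esketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// updateDaemonSetImage switches the pod template to the redis test image,
// triggering a rolling update of the daemon pods. The container name "app"
// is assumed.
func updateDaemonSetImage(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	patch := []byte(`{"spec":{"template":{"spec":{"containers":` +
		`[{"name":"app","image":"gcr.io/kubernetes-e2e-test-images/redis:1.0"}]}}}}`)
	_, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}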
STEP: Check that daemon pods images are updated.
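The check that produces the "Wrong image for pod" lines below lists the daemon pods and flags any container still running the old image; the messages repeat until each pod is deleted and recreated with the new image. Sketched roughly here (the label selector is an assumption; the e2e test selects its pods by a DaemonSet-specific label):

package e2esketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reportWrongImages prints one line per container still running something
// other than want, mirroring the "Wrong image for pod" log lines.
func reportWrongImages(ctx context.Context, cs kubernetes.Interface, ns, selector, want string) error {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		for _, c := range p.Spec.Containers {
			if c.Image != want {
				fmt.Printf("Wrong image for pod: %s. Expected: %s, got: %s.\n", p.Name, want, c.Image)
			}
		}
	}
	return nil
}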
Feb 17 23:15:20.162: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:20.162: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:20.905: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:22.027: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:22.027: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:22.031: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:22.909: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:22.909: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:22.912: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:23.948: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:23.948: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:23.951: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:24.908: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:24.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:24.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:25.908: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:25.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:25.912: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:26.908: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:26.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:26.910: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:28.222: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:28.222: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:28.260: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:28.918: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:28.918: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:29.144: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:29.908: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:29.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:29.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:31.865: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:31.865: INFO: Pod daemon-set-v8xsf is not available
Feb 17 23:15:31.865: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:31.871: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:32.001: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:32.001: INFO: Pod daemon-set-v8xsf is not available
Feb 17 23:15:32.001: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:32.003: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:32.909: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:32.909: INFO: Pod daemon-set-v8xsf is not available
Feb 17 23:15:32.909: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:32.913: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:34.114: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:34.114: INFO: Pod daemon-set-v8xsf is not available
Feb 17 23:15:34.114: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:34.117: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:36.463: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:36.463: INFO: Pod daemon-set-v8xsf is not available
Feb 17 23:15:36.463: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:36.467: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:36.908: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:36.908: INFO: Pod daemon-set-v8xsf is not available
Feb 17 23:15:36.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:36.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:38.276: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:38.276: INFO: Pod daemon-set-v8xsf is not available
Feb 17 23:15:38.276: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:38.279: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:40.069: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:40.069: INFO: Pod daemon-set-v8xsf is not available
Feb 17 23:15:40.069: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:40.361: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:40.908: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:40.908: INFO: Pod daemon-set-v8xsf is not available
Feb 17 23:15:40.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:40.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:41.910: INFO: Wrong image for pod: daemon-set-v8xsf. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:41.910: INFO: Pod daemon-set-v8xsf is not available
Feb 17 23:15:41.910: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:41.914: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:43.766: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:15:43.766: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:45.632: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:47.137: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:15:47.137: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:47.140: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:49.969: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:15:49.970: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:51.326: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:52.839: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:15:52.839: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:52.843: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:52.968: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:15:52.968: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:53.130: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:53.988: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:15:53.988: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:53.990: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:55.216: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:15:55.216: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:55.220: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:56.148: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:15:56.149: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:56.151: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:56.908: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:15:56.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:56.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:15:58.964: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:15:58.964: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:15:58.970: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:00.013: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:00.013: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:00.017: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:00.908: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:00.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:00.910: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:01.980: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:01.980: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:02.471: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:02.908: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:02.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:02.912: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:04.202: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:04.202: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:04.205: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:05.531: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:05.531: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:05.534: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:06.150: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:06.150: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:06.152: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:06.976: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:06.977: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:06.980: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:09.905: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:09.905: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:10.121: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:11.043: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:11.043: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:11.047: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:12.022: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:12.022: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:12.025: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:15.388: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:15.388: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:15.843: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:16.204: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:16.204: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:16.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:16.909: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:16.909: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:16.913: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:19.553: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:19.553: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:19.559: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:20.803: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:20.804: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:20.807: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:21.084: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:21.084: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:21.087: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:21.980: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:21.980: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:21.984: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:23.001: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:23.001: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:23.003: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:23.909: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:23.909: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:23.913: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:25.481: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:25.481: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:25.484: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:26.219: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:26.219: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:26.296: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:27.025: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:27.025: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:27.027: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:30.105: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:30.105: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:30.319: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:30.908: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:30.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:30.910: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:31.909: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:31.909: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:31.913: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:40.576: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:40.576: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:43.825: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:44.966: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:44.966: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:45.743: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:47.177: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:47.177: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:47.187: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:48.267: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:48.267: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:48.459: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:48.908: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:48.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:48.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:49.908: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:49.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:49.920: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:50.909: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:50.909: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:50.912: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:51.926: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:51.926: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:51.929: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:53.079: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:53.079: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:53.082: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:54.497: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:54.497: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:54.501: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:54.944: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:54.944: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:54.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:56.092: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:56.092: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:56.096: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:56.908: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:56.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:16:56.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:16:58.824: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:16:58.824: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:00.963: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:02.343: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:02.343: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:02.346: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:03.175: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:03.175: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:03.226: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:06.455: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:06.455: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:06.462: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:07.097: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:07.097: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:07.099: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:08.223: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:08.223: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:08.226: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:09.013: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:09.013: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:09.017: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:11.293: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:11.293: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:11.608: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:11.908: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:11.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:11.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:13.345: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:13.345: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:14.200: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:15.238: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:15.238: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:15.241: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:16.169: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:16.169: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:16.172: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:16.911: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:16.911: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:16.914: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:18.193: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:18.193: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:18.239: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:18.908: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:18.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:18.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:19.908: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:19.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:19.913: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:22.740: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:22.740: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:23.521: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:24.373: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:24.373: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:24.376: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:25.732: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:25.732: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:25.735: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:26.342: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:26.342: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:26.768: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:27.095: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:27.095: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:27.098: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:28.482: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:28.482: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:28.485: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:32.443: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:32.443: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:34.506: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:35.727: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:35.727: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:38.819: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:39.086: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:39.086: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:39.089: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:40.955: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:40.955: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:41.421: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:42.104: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:42.104: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:42.108: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:42.908: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:42.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:42.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:44.146: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:44.146: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:44.148: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:48.111: INFO: Pod daemon-set-ghr2j is not available
Feb 17 23:17:48.111: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:51.632: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:52.909: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:54.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:54.924: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:54.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:55.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:55.910: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:57.009: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:57.013: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:58.033: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:58.036: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:59.028: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:59.031: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:17:59.909: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:17:59.912: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:18:01.152: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:02.600: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:18:02.966: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:02.970: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:18:03.934: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:03.937: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:18:05.009: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:05.013: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:18:05.996: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:06.001: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:18:06.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:06.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:18:07.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:07.910: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:18:08.916: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:09.052: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:18:09.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:09.912: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 17 23:18:10.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:10.910: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:12.942: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:12.945: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:13.909: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:13.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:14.909: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:14.912: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:16.140: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:16.143: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:16.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:16.910: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:17.996: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:17.996: INFO: Pod daemon-set-znxn2 is not available
Feb 17 23:18:17.998: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:18.908: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:18.908: INFO: Pod daemon-set-znxn2 is not available
Feb 17 23:18:18.911: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:20.042: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:20.042: INFO: Pod daemon-set-znxn2 is not available
Feb 17 23:18:20.045: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:20.931: INFO: Wrong image for pod: daemon-set-znxn2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb 17 23:18:20.931: INFO: Pod daemon-set-znxn2 is not available
Feb 17 23:18:21.006: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:22.698: INFO: Pod daemon-set-ptds5 is not available
Feb 17 23:18:22.938: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 17 23:18:22.941: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:25.172: INFO: Number of nodes with available pods: 1
Feb 17 23:18:25.172: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:26.628: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:26.631: INFO: Number of nodes with available pods: 1
Feb 17 23:18:26.631: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:27.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:27.179: INFO: Number of nodes with available pods: 1
Feb 17 23:18:27.179: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:28.744: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:29.404: INFO: Number of nodes with available pods: 1
Feb 17 23:18:29.404: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:30.426: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:30.629: INFO: Number of nodes with available pods: 1
Feb 17 23:18:30.629: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:32.094: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:32.097: INFO: Number of nodes with available pods: 1
Feb 17 23:18:32.097: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:32.578: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:32.581: INFO: Number of nodes with available pods: 1
Feb 17 23:18:32.581: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:33.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:33.178: INFO: Number of nodes with available pods: 1
Feb 17 23:18:33.178: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:34.301: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:34.303: INFO: Number of nodes with available pods: 1
Feb 17 23:18:34.303: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:35.944: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:36.722: INFO: Number of nodes with available pods: 1
Feb 17 23:18:36.722: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:37.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:37.179: INFO: Number of nodes with available pods: 1
Feb 17 23:18:37.179: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:38.408: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:38.410: INFO: Number of nodes with available pods: 1
Feb 17 23:18:38.410: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:39.528: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:39.662: INFO: Number of nodes with available pods: 1
Feb 17 23:18:39.662: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:40.374: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:40.377: INFO: Number of nodes with available pods: 1
Feb 17 23:18:40.377: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:41.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:41.179: INFO: Number of nodes with available pods: 1
Feb 17 23:18:41.179: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:42.205: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:42.207: INFO: Number of nodes with available pods: 1
Feb 17 23:18:42.207: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:43.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:43.179: INFO: Number of nodes with available pods: 1
Feb 17 23:18:43.179: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:44.259: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:44.262: INFO: Number of nodes with available pods: 1
Feb 17 23:18:44.262: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:45.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:45.178: INFO: Number of nodes with available pods: 1
Feb 17 23:18:45.178: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:48.890: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:49.303: INFO: Number of nodes with available pods: 1
Feb 17 23:18:49.303: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:50.944: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:50.947: INFO: Number of nodes with available pods: 1
Feb 17 23:18:50.947: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:51.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:51.178: INFO: Number of nodes with available pods: 1
Feb 17 23:18:51.178: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:52.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:52.179: INFO: Number of nodes with available pods: 1
Feb 17 23:18:52.180: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:53.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:53.178: INFO: Number of nodes with available pods: 1
Feb 17 23:18:53.178: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:54.331: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:54.335: INFO: Number of nodes with available pods: 1
Feb 17 23:18:54.335: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:55.178: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:55.182: INFO: Number of nodes with available pods: 1
Feb 17 23:18:55.182: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:56.178: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:56.181: INFO: Number of nodes with available pods: 1
Feb 17 23:18:56.181: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:57.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:57.180: INFO: Number of nodes with available pods: 1
Feb 17 23:18:57.180: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:18:58.901: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:18:59.057: INFO: Number of nodes with available pods: 1
Feb 17 23:18:59.057: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:00.077: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:00.122: INFO: Number of nodes with available pods: 1
Feb 17 23:19:00.122: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:00.292: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:00.296: INFO: Number of nodes with available pods: 1
Feb 17 23:19:00.296: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:01.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:01.180: INFO: Number of nodes with available pods: 1
Feb 17 23:19:01.180: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:03.100: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:03.273: INFO: Number of nodes with available pods: 1
Feb 17 23:19:03.273: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:04.237: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:04.240: INFO: Number of nodes with available pods: 1
Feb 17 23:19:04.240: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:05.352: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:05.768: INFO: Number of nodes with available pods: 1
Feb 17 23:19:05.768: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:06.984: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:06.987: INFO: Number of nodes with available pods: 1
Feb 17 23:19:06.987: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:07.448: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:07.499: INFO: Number of nodes with available pods: 1
Feb 17 23:19:07.499: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:08.224: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:08.333: INFO: Number of nodes with available pods: 1
Feb 17 23:19:08.333: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:09.422: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:10.844: INFO: Number of nodes with available pods: 1
Feb 17 23:19:10.844: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:11.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:11.211: INFO: Number of nodes with available pods: 1
Feb 17 23:19:11.211: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:12.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:12.181: INFO: Number of nodes with available pods: 1
Feb 17 23:19:12.181: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:13.426: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:13.430: INFO: Number of nodes with available pods: 1
Feb 17 23:19:13.430: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:14.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:14.181: INFO: Number of nodes with available pods: 1
Feb 17 23:19:14.181: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:15.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:15.180: INFO: Number of nodes with available pods: 1
Feb 17 23:19:15.180: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:16.263: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:16.525: INFO: Number of nodes with available pods: 1
Feb 17 23:19:16.526: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:17.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:17.181: INFO: Number of nodes with available pods: 1
Feb 17 23:19:17.181: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:18.700: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:18.932: INFO: Number of nodes with available pods: 1
Feb 17 23:19:18.932: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:19.489: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:22.105: INFO: Number of nodes with available pods: 1
Feb 17 23:19:22.105: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:23.552: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:23.555: INFO: Number of nodes with available pods: 1
Feb 17 23:19:23.555: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:24.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:24.181: INFO: Number of nodes with available pods: 1
Feb 17 23:19:24.181: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:25.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:25.180: INFO: Number of nodes with available pods: 1
Feb 17 23:19:25.180: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:26.256: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:26.259: INFO: Number of nodes with available pods: 1
Feb 17 23:19:26.259: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:27.176: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:27.178: INFO: Number of nodes with available pods: 1
Feb 17 23:19:27.178: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:28.242: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:28.782: INFO: Number of nodes with available pods: 1
Feb 17 23:19:28.782: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:29.177: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:29.180: INFO: Number of nodes with available pods: 1
Feb 17 23:19:29.180: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:31.044: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:31.048: INFO: Number of nodes with available pods: 1
Feb 17 23:19:31.048: INFO: Node iruya-worker is running more than one daemon pod
Feb 17 23:19:31.874: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Feb 17 23:19:32.274: INFO: Number of nodes with available pods: 2
Feb 17 23:19:32.274: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8945, will wait for the garbage collector to delete the pods
Feb 17 23:19:32.592: INFO: Deleting DaemonSet.extensions daemon-set took: 4.355001ms
Feb 17 23:19:32.992: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.190787ms
Feb 17 23:19:44.621: INFO: Number of nodes with available pods: 0
Feb 17 23:19:44.621: INFO: Number of running nodes: 0, number of available pods: 0
Feb 17 23:19:44.625: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8945/daemonsets","resourceVersion":"6943439"},"items":null}

Feb 17 23:19:44.907: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8945/pods","resourceVersion":"6943441"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:19:45.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8945" for this suite.
Feb 17 23:20:03.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:20:03.664: INFO: namespace daemonsets-8945 deletion completed in 18.519262229s

• [SLOW TEST:364.760 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
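
For reference, the polling above is the expected shape of a DaemonSet rolling update: the controller replaces pods node by node, so the old image keeps being reported until each replacement becomes available, and availability only reaches 2/2 once both workers run the new pod. A minimal sketch of a DaemonSet using this strategy; the label key, container name, and maxUnavailable value are illustrative assumptions, only the updated image is taken from the log:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set          # illustrative label key
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1                   # assumed default: one node's pod replaced at a time
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app                         # illustrative container name
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0

Changing spec.template.spec.containers[0].image from docker.io/library/nginx:1.14-alpine to the redis image is what triggers the "Wrong image for pod" polling seen above.
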
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:20:03.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-943cd436-d49d-4d37-b514-688bf5d604ba
STEP: Creating a pod to test consume configMaps
Feb 17 23:20:07.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c" in namespace "configmap-6428" to be "success or failure"
Feb 17 23:20:08.781: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.407234304s
Feb 17 23:20:11.140: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.765585307s
Feb 17 23:20:14.653: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.278721157s
Feb 17 23:20:16.657: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.282755239s
Feb 17 23:20:18.661: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.286690912s
Feb 17 23:20:20.664: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.290019513s
Feb 17 23:20:23.489: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.11533499s
Feb 17 23:20:25.493: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.119321337s
Feb 17 23:20:27.498: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.123622724s
Feb 17 23:20:29.837: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.463113417s
Feb 17 23:20:31.841: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.466537962s
Feb 17 23:20:33.945: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.571108068s
Feb 17 23:20:35.948: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.573605919s
Feb 17 23:20:37.951: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 30.576500784s
Feb 17 23:20:39.954: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.580282868s
Feb 17 23:20:42.415: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.041166998s
Feb 17 23:20:44.419: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.044510907s
Feb 17 23:20:46.422: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Running", Reason="", readiness=true. Elapsed: 39.047755903s
Feb 17 23:20:49.556: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Running", Reason="", readiness=true. Elapsed: 42.181734832s
Feb 17 23:20:52.945: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 45.570952523s
STEP: Saw pod success
Feb 17 23:20:52.945: INFO: Pod "pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c" satisfied condition "success or failure"
Feb 17 23:20:54.226: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c container configmap-volume-test: 
STEP: delete the pod
Feb 17 23:20:56.101: INFO: Waiting for pod pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c to disappear
Feb 17 23:20:56.244: INFO: Pod pod-configmaps-fec70097-4eec-4872-acfd-18462958ea3c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:20:56.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6428" for this suite.
Feb 17 23:21:19.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:21:20.032: INFO: namespace configmap-6428 deletion completed in 22.189717097s

• [SLOW TEST:76.367 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
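
This test reduces to mounting a ConfigMap as a volume and letting a run-once container read a key from the mount; the pod reaching phase Succeeded is what satisfies the "success or failure" condition. A minimal sketch under those assumptions; the ConfigMap name, key, image, command, and mount path are illustrative, while the container name configmap-volume-test is from the log:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never                    # run once, expected to end in Succeeded
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
  containers:
  - name: configmap-volume-test
    image: busybox                        # illustrative; the suite uses its own test image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
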
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:21:20.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 17 23:21:52.370: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:21:54.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1976" for this suite.
Feb 17 23:22:08.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:22:10.881: INFO: namespace container-runtime-1976 deletion completed in 16.841807646s

• [SLOW TEST:50.848 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
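
What this test exercises: a container running as a non-root user writes its exit message to a non-default terminationMessagePath, and the kubelet copies the file contents into the container's terminated state, which is why the log compares &{DONE} against DONE. A minimal sketch; the image, UID, and path are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-test
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path
    securityContext:
      runAsUser: 1000                                     # non-root user

After the container exits, the message lands in status.containerStatuses[].state.terminated.message, which is what the assertion reads back.
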
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:22:10.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 23:22:14.463: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a" in namespace "downward-api-6580" to be "success or failure"
Feb 17 23:22:15.783: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 1.320292175s
Feb 17 23:22:17.786: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.322816382s
Feb 17 23:22:20.085: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.621985177s
Feb 17 23:22:22.219: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.755963976s
Feb 17 23:22:24.223: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.760210888s
Feb 17 23:22:26.228: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.764671311s
Feb 17 23:22:28.232: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.768456576s
Feb 17 23:22:30.342: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.878841331s
Feb 17 23:22:32.347: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.883491998s
Feb 17 23:22:34.408: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.944531151s
Feb 17 23:22:36.438: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.974904594s
Feb 17 23:22:38.466: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.00311702s
Feb 17 23:22:41.652: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 27.188407314s
Feb 17 23:22:43.655: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 29.192108354s
Feb 17 23:22:45.659: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 31.195642071s
Feb 17 23:22:47.662: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 33.19895964s
Feb 17 23:22:50.216: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 35.752546079s
Feb 17 23:22:53.224: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.760925979s
Feb 17 23:22:56.054: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 41.590799918s
Feb 17 23:22:58.174: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 43.710849984s
Feb 17 23:23:01.734: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 47.270345303s
Feb 17 23:23:03.784: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 49.321114978s
Feb 17 23:23:05.942: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 51.479187001s
Feb 17 23:23:10.642: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 56.179025913s
Feb 17 23:23:15.106: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.642879533s
Feb 17 23:23:18.453: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Running", Reason="", readiness=true. Elapsed: 1m3.989365795s
Feb 17 23:23:20.458: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Running", Reason="", readiness=true. Elapsed: 1m5.994787314s
Feb 17 23:23:22.461: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Running", Reason="", readiness=true. Elapsed: 1m7.99822425s
Feb 17 23:23:24.465: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m10.002236763s
STEP: Saw pod success
Feb 17 23:23:24.465: INFO: Pod "downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a" satisfied condition "success or failure"
Feb 17 23:23:24.468: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a container client-container: 
STEP: delete the pod
Feb 17 23:23:25.269: INFO: Waiting for pod downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a to disappear
Feb 17 23:23:25.654: INFO: Pod downwardapi-volume-01ebc2b0-e2b1-46ed-bae4-c3205b51440a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:23:25.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6580" for this suite.
Feb 17 23:23:43.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:23:43.253: INFO: namespace downward-api-6580 deletion completed in 17.594025497s

• [SLOW TEST:92.372 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
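
The pod here mounts a downward API volume exposing limits.cpu; because the container declares no CPU limit, the projected value falls back to the node's allocatable CPU, which is the behavior under test. A sketch under those assumptions; the image, command, and mount path are illustrative, while the container name client-container is from the log:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                        # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set here, so the projected value
    # defaults to the node's allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
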
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:23:43.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-189
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 17 23:23:45.113: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 17 23:24:37.040: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.143:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-189 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 23:24:37.041: INFO: >>> kubeConfig: /root/.kube/config
Feb 17 23:24:37.991: INFO: Found all expected endpoints: [netserver-0]
Feb 17 23:24:38.267: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.44:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-189 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 23:24:38.267: INFO: >>> kubeConfig: /root/.kube/config
Feb 17 23:24:38.504: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:24:38.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-189" for this suite.
Feb 17 23:25:13.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:25:15.205: INFO: namespace pod-network-test-189 deletion completed in 36.357249436s

• [SLOW TEST:91.951 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
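
The granular check works by exec'ing curl from a host-network test pod against each netserver pod's /hostName endpoint on port 8080, as the ExecWithOptions lines show. A sketch of the kind of server pod involved, assuming the suite's netexec test image; the tag, args, and label are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: netserver-0
  labels:
    role: netserver                       # illustrative selector label
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.1   # assumed tag
    args: ["--http-port=8080"]            # serves /hostName over HTTP on 8080
    ports:
    - containerPort: 8080
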
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:25:15.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 17 23:25:16.679: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9356,SelfLink:/api/v1/namespaces/watch-9356/configmaps/e2e-watch-test-configmap-a,UID:f2c29f33-7b7f-4016-9775-0d1dccc0b653,ResourceVersion:6944136,Generation:0,CreationTimestamp:2021-02-17 23:25:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 17 23:25:16.679: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9356,SelfLink:/api/v1/namespaces/watch-9356/configmaps/e2e-watch-test-configmap-a,UID:f2c29f33-7b7f-4016-9775-0d1dccc0b653,ResourceVersion:6944136,Generation:0,CreationTimestamp:2021-02-17 23:25:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 17 23:25:26.685: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9356,SelfLink:/api/v1/namespaces/watch-9356/configmaps/e2e-watch-test-configmap-a,UID:f2c29f33-7b7f-4016-9775-0d1dccc0b653,ResourceVersion:6944155,Generation:0,CreationTimestamp:2021-02-17 23:25:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 17 23:25:26.685: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9356,SelfLink:/api/v1/namespaces/watch-9356/configmaps/e2e-watch-test-configmap-a,UID:f2c29f33-7b7f-4016-9775-0d1dccc0b653,ResourceVersion:6944155,Generation:0,CreationTimestamp:2021-02-17 23:25:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 17 23:25:36.694: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9356,SelfLink:/api/v1/namespaces/watch-9356/configmaps/e2e-watch-test-configmap-a,UID:f2c29f33-7b7f-4016-9775-0d1dccc0b653,ResourceVersion:6944173,Generation:0,CreationTimestamp:2021-02-17 23:25:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 17 23:25:36.694: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9356,SelfLink:/api/v1/namespaces/watch-9356/configmaps/e2e-watch-test-configmap-a,UID:f2c29f33-7b7f-4016-9775-0d1dccc0b653,ResourceVersion:6944173,Generation:0,CreationTimestamp:2021-02-17 23:25:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 17 23:25:46.701: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9356,SelfLink:/api/v1/namespaces/watch-9356/configmaps/e2e-watch-test-configmap-a,UID:f2c29f33-7b7f-4016-9775-0d1dccc0b653,ResourceVersion:6944191,Generation:0,CreationTimestamp:2021-02-17 23:25:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 17 23:25:46.701: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9356,SelfLink:/api/v1/namespaces/watch-9356/configmaps/e2e-watch-test-configmap-a,UID:f2c29f33-7b7f-4016-9775-0d1dccc0b653,ResourceVersion:6944191,Generation:0,CreationTimestamp:2021-02-17 23:25:16 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 17 23:25:56.708: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9356,SelfLink:/api/v1/namespaces/watch-9356/configmaps/e2e-watch-test-configmap-b,UID:9ee51d0f-1a84-4dea-920a-c313a5223e3e,ResourceVersion:6944209,Generation:0,CreationTimestamp:2021-02-17 23:25:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 17 23:25:56.708: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9356,SelfLink:/api/v1/namespaces/watch-9356/configmaps/e2e-watch-test-configmap-b,UID:9ee51d0f-1a84-4dea-920a-c313a5223e3e,ResourceVersion:6944209,Generation:0,CreationTimestamp:2021-02-17 23:25:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 17 23:26:06.850: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9356,SelfLink:/api/v1/namespaces/watch-9356/configmaps/e2e-watch-test-configmap-b,UID:9ee51d0f-1a84-4dea-920a-c313a5223e3e,ResourceVersion:6944227,Generation:0,CreationTimestamp:2021-02-17 23:25:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 17 23:26:06.850: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9356,SelfLink:/api/v1/namespaces/watch-9356/configmaps/e2e-watch-test-configmap-b,UID:9ee51d0f-1a84-4dea-920a-c313a5223e3e,ResourceVersion:6944227,Generation:0,CreationTimestamp:2021-02-17 23:25:56 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:26:16.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9356" for this suite.
Feb 17 23:26:27.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:26:27.664: INFO: namespace watch-9356 deletion completed in 10.433121296s

• [SLOW TEST:72.459 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
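
Each event above appears twice because two watchers (label A, and label A-or-B) observe the same object. The object under watch is just a labeled ConfigMap whose data carries the mutation counter seen in the MODIFIED events; as a manifest, after the first modification it would look like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-9356
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"

Something like kubectl get configmaps -n watch-9356 -l watch-this-configmap=multiple-watchers-A --watch would reproduce watcher A's view of these ADDED/MODIFIED/DELETED notifications.
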
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:26:27.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:26:49.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8575" for this suite.
Feb 17 23:27:34.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:27:34.284: INFO: namespace replication-controller-8575 deletion completed in 44.308002622s

• [SLOW TEST:66.619 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
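The adoption flow above amounts to creating a bare pod and then a replication controller whose selector matches the pod's label, at which point the controller takes ownership of the orphan. A minimal sketch (the image is borrowed from the kubectl specs later in this run; all names are otherwise illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine

After the RC is created, the pre-existing pod gains an ownerReference pointing at the controller rather than being replaced by a new replica.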
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:27:34.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:28:28.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-135" for this suite.
Feb 17 23:30:52.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:30:52.179: INFO: namespace kubelet-test-135 deletion completed in 2m24.158576025s

• [SLOW TEST:197.895 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
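This spec schedules a busybox pod whose command writes to stdout and asserts the output is retrievable via the kubelet's log path. A minimal equivalent (pod name and message are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-scheduling
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo 'Hello from busybox'"]

Once the pod completes, kubectl logs busybox-scheduling should return the echoed line.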
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:30:52.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-9148f8a3-99d0-414e-a3f4-40c4bad29775
STEP: Creating secret with name s-test-opt-upd-983e7569-c32e-4b41-beb8-b1b86a862736
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9148f8a3-99d0-414e-a3f4-40c4bad29775
STEP: Updating secret s-test-opt-upd-983e7569-c32e-4b41-beb8-b1b86a862736
STEP: Creating secret with name s-test-opt-create-1a007ae3-ca18-446c-bf5b-42aeda88a114
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:34:00.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9306" for this suite.
Feb 17 23:35:14.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:35:14.980: INFO: namespace secrets-9306 deletion completed in 1m14.327216158s

• [SLOW TEST:262.800 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
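The spec above mounts secrets marked optional, so the pod starts even when a referenced secret is deleted, and later creations and updates appear in the volume without a restart. A minimal sketch of that shape (secret names shortened from the UUID-suffixed names in the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-optional
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: del
      mountPath: /etc/secret-volumes/delete
    - name: create
      mountPath: /etc/secret-volumes/create
  volumes:
  - name: del
    secret:
      secretName: s-test-opt-del
      optional: true
  - name: create
    secret:
      secretName: s-test-opt-create
      optional: true

Because optional is true, deleting s-test-opt-del empties its mount rather than breaking the pod, and creating s-test-opt-create afterwards populates its mount in place.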
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:35:14.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:35:27.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8675" for this suite.
Feb 17 23:35:38.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:35:39.547: INFO: namespace namespaces-8675 deletion completed in 11.405442506s
STEP: Destroying namespace "nsdeletetest-333" for this suite.
Feb 17 23:35:39.549: INFO: Namespace nsdeletetest-333 was already deleted
STEP: Destroying namespace "nsdeletetest-637" for this suite.
Feb 17 23:35:47.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:35:47.757: INFO: namespace nsdeletetest-637 deletion completed in 8.207772607s

• [SLOW TEST:32.776 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
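The semantics verified here are that namespace deletion cascades to namespaced objects such as services. A minimal sketch of the service the spec creates inside its test namespace (names illustrative):

apiVersion: v1
kind: Service
metadata:
  name: test-service
  namespace: nsdeletetest-333
spec:
  selector:
    app: test
  ports:
  - port: 80
    targetPort: 80

After the namespace is deleted and recreated, listing services in it must come back empty, which is exactly what the "Verifying there is no service in the namespace" step checks.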
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:35:47.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 17 23:35:49.304: INFO: Waiting up to 5m0s for pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997" in namespace "downward-api-5219" to be "success or failure"
Feb 17 23:35:49.355: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 51.842647ms
Feb 17 23:35:51.358: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054491095s
Feb 17 23:35:53.470: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166355019s
Feb 17 23:35:55.473: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169872606s
Feb 17 23:35:57.476: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 8.172375527s
Feb 17 23:35:59.578: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 10.274529394s
Feb 17 23:36:01.632: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 12.328032682s
Feb 17 23:36:04.207: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 14.9037179s
Feb 17 23:36:06.536: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 17.232346034s
Feb 17 23:36:09.639: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 20.335656572s
Feb 17 23:36:11.994: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 22.690355318s
Feb 17 23:36:14.440: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 25.136830684s
Feb 17 23:36:16.789: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 27.484944236s
Feb 17 23:36:18.792: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 29.487930129s
Feb 17 23:36:20.795: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 31.491007172s
Feb 17 23:36:22.798: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 33.494380648s
Feb 17 23:36:24.801: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 35.497456418s
Feb 17 23:36:26.804: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 37.500805125s
Feb 17 23:36:28.809: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 39.505297986s
Feb 17 23:36:30.812: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 41.508522673s
Feb 17 23:36:32.817: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 43.513010682s
Feb 17 23:36:34.910: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 45.606763264s
Feb 17 23:36:36.915: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 47.610951515s
Feb 17 23:36:39.381: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 50.077434294s
Feb 17 23:36:41.385: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 52.081462447s
Feb 17 23:36:43.389: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 54.084931053s
Feb 17 23:36:45.392: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 56.088236387s
Feb 17 23:36:47.396: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 58.092588047s
Feb 17 23:36:49.399: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.095807788s
Feb 17 23:36:51.404: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.100048201s
Feb 17 23:36:54.406: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.102404156s
Feb 17 23:36:56.410: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.106807099s
Feb 17 23:36:59.011: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.707665768s
Feb 17 23:37:01.016: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.71216978s
Feb 17 23:37:03.064: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.760193101s
Feb 17 23:37:05.316: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.012148152s
Feb 17 23:37:07.319: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.015442206s
Feb 17 23:37:09.323: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.019081484s
Feb 17 23:37:11.694: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Running", Reason="", readiness=true. Elapsed: 1m22.389989479s
Feb 17 23:37:13.698: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m24.394627201s
STEP: Saw pod success
Feb 17 23:37:13.698: INFO: Pod "downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997" satisfied condition "success or failure"
Feb 17 23:37:13.735: INFO: Trying to get logs from node iruya-worker2 pod downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997 container dapi-container: 
STEP: delete the pod
Feb 17 23:37:16.050: INFO: Waiting for pod downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997 to disappear
Feb 17 23:37:16.969: INFO: Pod downward-api-bda20f34-bcb9-4880-b4bc-719f53e4b997 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:37:16.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5219" for this suite.
Feb 17 23:37:23.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:37:23.209: INFO: namespace downward-api-5219 deletion completed in 6.236508651s

• [SLOW TEST:95.452 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
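The downward API exposure tested above uses resourceFieldRef env vars. A minimal sketch of the pod shape (the dapi-container name matches the log; resource values and env names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 1250m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory

The container's env output then carries the limit and request values, which the test greps out of the pod logs.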
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:37:23.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-898492fe-bcb5-47c4-a00e-2883d1209191
STEP: Creating a pod to test consume secrets
Feb 17 23:37:24.551: INFO: Waiting up to 5m0s for pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c" in namespace "secrets-7415" to be "success or failure"
Feb 17 23:37:24.663: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 111.876238ms
Feb 17 23:37:28.327: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.776538564s
Feb 17 23:37:30.330: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.779339799s
Feb 17 23:37:32.333: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.782527757s
Feb 17 23:37:34.336: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.785307913s
Feb 17 23:37:36.339: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.787901464s
Feb 17 23:37:38.393: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.842372449s
Feb 17 23:37:40.465: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.91422665s
Feb 17 23:37:42.468: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.917235703s
Feb 17 23:37:44.615: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.064151293s
Feb 17 23:37:46.732: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.180896001s
Feb 17 23:37:48.844: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.293078995s
Feb 17 23:37:50.847: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.295959576s
Feb 17 23:37:53.714: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 29.163478837s
Feb 17 23:37:55.718: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 31.166951678s
Feb 17 23:37:57.736: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.185599916s
Feb 17 23:38:00.111: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.560023996s
Feb 17 23:38:02.354: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.803155961s
Feb 17 23:38:06.016: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 41.464898903s
Feb 17 23:38:08.460: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 43.909297778s
Feb 17 23:38:10.463: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 45.912402287s
Feb 17 23:38:12.633: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 48.082156032s
Feb 17 23:38:14.635: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 50.08443113s
Feb 17 23:38:16.710: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 52.159548226s
Feb 17 23:38:18.765: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 54.214114978s
Feb 17 23:38:20.768: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 56.217168826s
Feb 17 23:38:22.771: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 58.220288364s
Feb 17 23:38:25.070: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m0.519675484s
Feb 17 23:38:27.142: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m2.591378895s
Feb 17 23:38:30.953: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m6.402478553s
Feb 17 23:38:32.956: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m8.40524341s
Feb 17 23:38:35.676: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.124891483s
Feb 17 23:38:37.680: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m13.128681838s
Feb 17 23:38:40.887: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.33582478s
Feb 17 23:38:43.874: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m19.32283674s
Feb 17 23:38:45.877: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m21.326082994s
Feb 17 23:38:47.881: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m23.329852491s
Feb 17 23:38:50.053: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m25.501848319s
Feb 17 23:38:52.604: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m28.053256914s
Feb 17 23:38:55.056: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m30.504923724s
Feb 17 23:38:57.060: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m32.508693466s
Feb 17 23:38:59.063: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.512266252s
Feb 17 23:39:01.563: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m37.011966567s
Feb 17 23:39:04.521: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m39.970461512s
Feb 17 23:39:07.344: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m42.793514917s
Feb 17 23:39:09.526: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m44.975653397s
Feb 17 23:39:11.531: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m46.979765636s
Feb 17 23:39:13.603: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m49.051869766s
Feb 17 23:39:17.008: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Pending", Reason="", readiness=false. Elapsed: 1m52.45732152s
Feb 17 23:39:19.180: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Running", Reason="", readiness=true. Elapsed: 1m54.62892977s
Feb 17 23:39:21.184: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m56.632971001s
STEP: Saw pod success
Feb 17 23:39:21.184: INFO: Pod "pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c" satisfied condition "success or failure"
Feb 17 23:39:21.186: INFO: Trying to get logs from node iruya-worker pod pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c container secret-volume-test: 
STEP: delete the pod
Feb 17 23:39:21.534: INFO: Waiting for pod pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c to disappear
Feb 17 23:39:21.574: INFO: Pod pod-secrets-2b058c30-6c38-4ae3-a911-2a45c664241c no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:39:21.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7415" for this suite.
Feb 17 23:39:29.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:39:29.796: INFO: namespace secrets-7415 deletion completed in 8.218050994s

• [SLOW TEST:126.587 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
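This spec combines a non-root security context with defaultMode and fsGroup on a secret volume. A minimal sketch (the secret name is taken from the log; uid, gid, and mode are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-nonroot
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1001
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-898492fe-bcb5-47c4-a00e-2883d1209191
      defaultMode: 0400

The test asserts that the projected files carry the requested mode and the fsGroup gid while the container runs as the non-root uid.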
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:39:29.797: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 17 23:39:44.665: INFO: Successfully updated pod "labelsupdate2d8317b8-4bf3-4516-a37a-85832811a426"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:39:46.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6980" for this suite.
Feb 17 23:40:27.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:40:28.016: INFO: namespace downward-api-6980 deletion completed in 41.020672552s

• [SLOW TEST:58.219 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
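The "Successfully updated pod" line above refers to relabeling a running pod and watching the downward API volume refresh. A minimal sketch of the pod shape (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate
  labels:
    key1: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels

Running kubectl label pod labelsupdate key2=value2 then causes the mounted labels file to be rewritten in place, with no container restart, which is the modification this spec waits to observe.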
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:40:28.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 17 23:40:28.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9516'
Feb 17 23:41:30.115: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 17 23:41:30.115: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb 17 23:41:30.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-9516'
Feb 17 23:41:30.264: INFO: stderr: ""
Feb 17 23:41:30.264: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:41:30.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9516" for this suite.
Feb 17 23:41:56.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:41:56.383: INFO: namespace kubectl-9516 deletion completed in 26.116212255s

• [SLOW TEST:88.367 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
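As the deprecation warning above notes, the job/v1 generator is on its way out; the kubectl run invocation in this spec is equivalent to creating the Job directly. A sketch of the manifest that command generates (names and image taken from the log):

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine

Applying this with kubectl create -f produces the same job.batch/e2e-test-nginx-job object the test verifies and then deletes.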
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:41:56.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 23:42:02.505: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059" in namespace "projected-6913" to be "success or failure"
Feb 17 23:42:02.521: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 16.133812ms
Feb 17 23:42:06.616: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 4.111162191s
Feb 17 23:42:08.619: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113940928s
Feb 17 23:42:13.237: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 10.731749475s
Feb 17 23:42:18.088: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 15.58237554s
Feb 17 23:42:20.090: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 17.5850883s
Feb 17 23:42:22.276: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 19.771138164s
Feb 17 23:42:26.436: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 23.930999328s
Feb 17 23:42:28.439: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 25.933870802s
Feb 17 23:42:30.989: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 28.48393096s
Feb 17 23:42:32.992: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 30.486631072s
Feb 17 23:42:35.065: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 32.559797394s
Feb 17 23:42:37.070: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 34.564843584s
Feb 17 23:42:39.093: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 36.588258493s
Feb 17 23:42:41.098: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 38.592499826s
Feb 17 23:42:43.106: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 40.601051033s
Feb 17 23:42:45.110: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 42.605017882s
Feb 17 23:42:47.115: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 44.60934924s
Feb 17 23:42:49.541: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 47.036044727s
Feb 17 23:42:51.546: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 49.040680045s
Feb 17 23:42:59.022: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 56.516738674s
Feb 17 23:43:01.268: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 58.762659866s
Feb 17 23:43:03.665: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m1.159767758s
Feb 17 23:43:05.669: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m3.163855112s
Feb 17 23:43:07.673: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m5.168065991s
Feb 17 23:43:09.677: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m7.171829764s
Feb 17 23:43:11.681: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m9.175860845s
Feb 17 23:43:13.686: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m11.180343454s
Feb 17 23:43:16.773: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m14.26790915s
Feb 17 23:43:18.777: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m16.272063572s
Feb 17 23:43:20.789: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m18.284208967s
Feb 17 23:43:23.447: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m20.942097281s
Feb 17 23:43:25.451: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m22.945946335s
Feb 17 23:43:28.154: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m25.648418502s
Feb 17 23:43:30.158: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m27.65315459s
Feb 17 23:43:32.162: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m29.65693422s
Feb 17 23:43:34.193: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m31.687756761s
Feb 17 23:43:37.305: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m34.799845684s
Feb 17 23:43:39.309: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m36.803809829s
Feb 17 23:43:41.314: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Pending", Reason="", readiness=false. Elapsed: 1m38.808473999s
Feb 17 23:43:43.317: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m40.811909381s
STEP: Saw pod success
Feb 17 23:43:43.317: INFO: Pod "downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059" satisfied condition "success or failure"
Feb 17 23:43:43.319: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059 container client-container: 
STEP: delete the pod
Feb 17 23:43:43.778: INFO: Waiting for pod downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059 to disappear
Feb 17 23:43:44.017: INFO: Pod downwardapi-volume-37402dbf-ab12-48f3-84a0-6701e3f42059 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:43:44.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6913" for this suite.
Feb 17 23:43:50.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:43:51.002: INFO: namespace projected-6913 deletion completed in 6.982407151s

• [SLOW TEST:114.618 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
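The projected variant tested here surfaces the container's cpu request through a projected downwardAPI volume source rather than an env var. A minimal sketch (the client-container name matches the log; request value, paths, and divisor are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m

With a divisor of 1m, the mounted file contains 250, which the test reads back from the container logs.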
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:43:51.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0217 23:44:32.263559       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 17 23:44:32.263: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:44:32.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4495" for this suite.
Feb 17 23:45:01.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:45:06.764: INFO: namespace gc-4495 deletion completed in 34.497668042s

• [SLOW TEST:75.761 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
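The "delete options say so" in this spec refers to deleting the RC with orphan propagation, so the garbage collector must leave the pods alone for the 30-second observation window above. On this release the same effect is available from the CLI (the RC name is not shown in the log, so <rc-name> is a placeholder):

kubectl --kubeconfig=/root/.kube/config delete rc <rc-name> --namespace=gc-4495 --cascade=false

This corresponds to sending propagationPolicy: Orphan on the API delete call; the pods survive with their ownerReferences cleared.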
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:45:06.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb 17 23:45:10.719: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix985091342/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:45:10.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-64" for this suite.
Feb 17 23:45:18.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:45:18.621: INFO: namespace kubectl-64 deletion completed in 6.558406323s

• [SLOW TEST:11.857 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
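The proxy started above listens on a Unix domain socket instead of a TCP port, and the "/api/ output" step fetches the API discovery document through it. A sketch of the same check by hand (the socket path is illustrative; the log's actual path is shown above):

kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy.sock &
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

A successful run returns the APIVersions JSON from the apiserver, which is what the spec asserts on.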
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:45:18.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 17 23:45:19.522: INFO: Waiting up to 5m0s for pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7" in namespace "emptydir-9563" to be "success or failure"
Feb 17 23:45:19.554: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 31.776908ms
Feb 17 23:45:22.713: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.191428729s
Feb 17 23:45:24.716: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.193873386s
Feb 17 23:45:28.215: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.692888131s
Feb 17 23:45:31.630: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.107790145s
Feb 17 23:45:33.646: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 14.123605647s
Feb 17 23:45:35.963: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.440983415s
Feb 17 23:45:38.134: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.612328448s
Feb 17 23:45:40.635: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 21.112703646s
Feb 17 23:45:44.288: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.766027849s
Feb 17 23:45:46.292: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.769733045s
Feb 17 23:45:48.296: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.774034892s
Feb 17 23:45:50.300: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 30.778132099s
Feb 17 23:45:52.350: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 32.828389684s
Feb 17 23:45:54.354: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 34.832518757s
Feb 17 23:45:56.359: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 36.836931609s
Feb 17 23:45:58.365: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 38.843177373s
Feb 17 23:46:00.388: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 40.865645535s
Feb 17 23:46:02.391: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 42.869137163s
Feb 17 23:46:04.394: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 44.872273755s
Feb 17 23:46:06.665: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Running", Reason="", readiness=true. Elapsed: 47.143520976s
Feb 17 23:46:08.670: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 49.148066113s
STEP: Saw pod success
Feb 17 23:46:08.670: INFO: Pod "pod-bad7ec41-040c-4889-afd5-8093992c4fc7" satisfied condition "success or failure"
Feb 17 23:46:08.675: INFO: Trying to get logs from node iruya-worker pod pod-bad7ec41-040c-4889-afd5-8093992c4fc7 container test-container: 
STEP: delete the pod
Feb 17 23:46:10.461: INFO: Waiting for pod pod-bad7ec41-040c-4889-afd5-8093992c4fc7 to disappear
Feb 17 23:46:10.645: INFO: Pod pod-bad7ec41-040c-4889-afd5-8093992c4fc7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:46:10.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9563" for this suite.
Feb 17 23:46:16.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:46:16.808: INFO: namespace emptydir-9563 deletion completed in 6.159712568s

• [SLOW TEST:58.186 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
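The (root,0777,tmpfs) triple in the spec name means: run as root, expect 0777 permissions, and back the emptyDir with memory. A minimal sketch of that shape (pod name and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory

With medium: Memory the mount shows up as tmpfs, and the mode check in the container output is what the framework compares against the expected permissions.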
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:46:16.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-7456
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7456
STEP: Deleting pre-stop pod
Feb 17 23:47:30.116: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:47:30.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7456" for this suite.
Feb 17 23:48:44.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:48:44.272: INFO: namespace prestop-7456 deletion completed in 1m14.096346511s

• [SLOW TEST:147.464 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
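
The PreStop case above works by giving the tester pod a preStop lifecycle hook that reports back to the server pod; the "Received": {"prestop": 1} entry in the JSON dump is that report arriving. A stripped-down pod with the same kind of hook follows; the target URL is a placeholder, not the endpoint the test uses.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: tester
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # Runs inside the container when deletion starts, before SIGTERM;
          # the real test reports to its server pod instead of this placeholder.
          command: ["sh", "-c", "wget -q -O- http://server.example/prestop || true"]
EOF
# Deleting the pod triggers the hook during graceful termination:
kubectl delete pod prestop-demo
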
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:48:44.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:50:27.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4156" for this suite.
Feb 17 23:50:37.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:50:37.916: INFO: namespace container-runtime-4156 deletion completed in 10.30231671s

• [SLOW TEST:113.643 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
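
The three containers above exercise one restart policy each (the suffixes rpa, rpof, and rpn map to Always, OnFailure, and Never), and the test asserts the resulting RestartCount, Phase, Ready condition, and State. The same fields can be inspected by hand on a deliberately failing pod; names and image below are illustrative.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: Never   # with OnFailure or Always the kubelet would restart it instead
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF
# Phase, restart count and terminated state are all visible in status:
kubectl get pod terminate-demo -o jsonpath='{.status.phase}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.exitCode}{"\n"}'
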
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:50:37.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3589
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3589
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3589
Feb 17 23:50:38.676: INFO: Found 0 stateful pods, waiting for 1
Feb 17 23:50:49.557: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 17 23:50:49.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3589 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 23:50:50.805: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Feb 17 23:50:50.805: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 23:50:50.805: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 17 23:50:50.808: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 17 23:51:01.658: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 17 23:51:01.658: INFO: Waiting for statefulset status.replicas updated to 0
Feb 17 23:51:03.412: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999238s
Feb 17 23:51:04.415: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.741940808s
Feb 17 23:51:06.553: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.739104599s
Feb 17 23:51:07.984: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.601007839s
Feb 17 23:51:08.988: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.17061507s
Feb 17 23:51:09.992: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.16657818s
Feb 17 23:51:11.389: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.162615397s
Feb 17 23:51:12.393: INFO: Verifying statefulset ss doesn't scale past 1 for another 765.185905ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3589
Feb 17 23:51:13.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3589 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 23:51:14.153: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Feb 17 23:51:14.153: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 17 23:51:14.153: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 17 23:51:14.157: INFO: Found 1 stateful pods, waiting for 3
Feb 17 23:51:24.160: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 23:51:24.160: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 23:51:24.160: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 17 23:51:34.910: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 23:51:34.911: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 23:51:34.911: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 17 23:51:44.395: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 23:51:44.395: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 23:51:44.395: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 17 23:51:54.160: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 23:51:54.160: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 23:51:54.160: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 17 23:52:04.161: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 23:52:04.161: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 23:52:04.161: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 17 23:52:14.160: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 23:52:14.160: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 17 23:52:14.160: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 17 23:52:14.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3589 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 23:52:21.391: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Feb 17 23:52:21.391: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 23:52:21.391: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 17 23:52:21.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3589 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 23:52:21.687: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Feb 17 23:52:21.688: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 23:52:21.688: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 17 23:52:21.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3589 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 17 23:52:22.128: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Feb 17 23:52:22.128: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 17 23:52:22.128: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 17 23:52:22.128: INFO: Waiting for statefulset status.replicas updated to 0
Feb 17 23:52:22.223: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb 17 23:52:32.228: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 17 23:52:32.228: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 17 23:52:32.228: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 17 23:52:32.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999806s
Feb 17 23:52:33.255: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980868772s
Feb 17 23:52:34.258: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97786999s
Feb 17 23:52:35.263: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.974710073s
Feb 17 23:52:36.267: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.970340054s
Feb 17 23:52:37.270: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.966487965s
Feb 17 23:52:38.337: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.963393432s
Feb 17 23:52:39.342: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.895831816s
Feb 17 23:52:40.346: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.891535609s
Feb 17 23:52:41.349: INFO: Verifying statefulset ss doesn't scale past 3 for another 887.228835ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-3589
Feb 17 23:52:42.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3589 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 23:52:42.542: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Feb 17 23:52:42.542: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 17 23:52:42.542: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 17 23:52:42.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3589 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 23:52:42.726: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Feb 17 23:52:42.726: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 17 23:52:42.726: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 17 23:52:42.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3589 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 17 23:52:42.905: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Feb 17 23:52:42.905: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 17 23:52:42.905: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 17 23:52:42.905: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 17 23:53:02.919: INFO: Deleting all statefulset in ns statefulset-3589
Feb 17 23:53:02.923: INFO: Scaling statefulset ss to 0
Feb 17 23:53:02.932: INFO: Waiting for statefulset status.replicas updated to 0
Feb 17 23:53:02.935: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:53:02.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3589" for this suite.
Feb 17 23:53:08.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:53:09.054: INFO: namespace statefulset-3589 deletion completed in 6.103253815s

• [SLOW TEST:151.137 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
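
The scaling test above is driven entirely by readiness: moving index.html out of the nginx web root (the kubectl exec ... mv lines) fails the readiness probe, and with ordered pod management the controller then refuses to move past the unhealthy pod; scale-up proceeds in ordinal order, scale-down in reverse. A minimal StatefulSet of the same shape, with illustrative names (the probe path and image mirror what the test uses):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None
  selector:
    app: ss-demo
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.14-alpine
        readinessProbe:
          httpGet:
            path: /index.html
            port: 80
EOF
# Make ss-0 unready the same way the test does, then try to scale:
kubectl exec ss-0 -- sh -c 'mv /usr/share/nginx/html/index.html /tmp/'
kubectl scale statefulset ss --replicas=5   # halts: no new ordinal is created while ss-0 is unready
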
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:53:09.055: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 17 23:53:09.123: INFO: Waiting up to 5m0s for pod "pod-147aab88-6503-47ea-8fb2-0d7b46ebe243" in namespace "emptydir-1413" to be "success or failure"
Feb 17 23:53:09.127: INFO: Pod "pod-147aab88-6503-47ea-8fb2-0d7b46ebe243": Phase="Pending", Reason="", readiness=false. Elapsed: 3.915538ms
Feb 17 23:53:11.234: INFO: Pod "pod-147aab88-6503-47ea-8fb2-0d7b46ebe243": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11015863s
Feb 17 23:53:13.237: INFO: Pod "pod-147aab88-6503-47ea-8fb2-0d7b46ebe243": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113527561s
Feb 17 23:53:15.241: INFO: Pod "pod-147aab88-6503-47ea-8fb2-0d7b46ebe243": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.117476926s
STEP: Saw pod success
Feb 17 23:53:15.241: INFO: Pod "pod-147aab88-6503-47ea-8fb2-0d7b46ebe243" satisfied condition "success or failure"
Feb 17 23:53:15.244: INFO: Trying to get logs from node iruya-worker pod pod-147aab88-6503-47ea-8fb2-0d7b46ebe243 container test-container: 
STEP: delete the pod
Feb 17 23:53:15.282: INFO: Waiting for pod pod-147aab88-6503-47ea-8fb2-0d7b46ebe243 to disappear
Feb 17 23:53:15.296: INFO: Pod pod-147aab88-6503-47ea-8fb2-0d7b46ebe243 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:53:15.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1413" for this suite.
Feb 17 23:53:21.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:53:21.426: INFO: namespace emptydir-1413 deletion completed in 6.126088646s

• [SLOW TEST:12.371 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:53:21.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 17 23:53:25.617: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:53:25.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8853" for this suite.
Feb 17 23:53:31.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:53:32.009: INFO: namespace container-runtime-8853 deletion completed in 6.10729206s

• [SLOW TEST:10.584 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
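
The termination-message case above asserts that when a container fails without writing to its termination-message file, the kubelet falls back to the tail of the container log, which is why "DONE" shows up as the message. A hand-rolled version, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # Writes to stdout only, then fails; nothing goes to /dev/termination-log.
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# The log tail shows up as the termination message in container status:
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}{"\n"}'
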
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:53:32.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 17 23:53:32.092: INFO: Waiting up to 5m0s for pod "downward-api-4afd84ec-100d-48e6-8c3e-93b45aeb71f6" in namespace "downward-api-9549" to be "success or failure"
Feb 17 23:53:32.102: INFO: Pod "downward-api-4afd84ec-100d-48e6-8c3e-93b45aeb71f6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.025063ms
Feb 17 23:53:34.106: INFO: Pod "downward-api-4afd84ec-100d-48e6-8c3e-93b45aeb71f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01405728s
Feb 17 23:53:36.111: INFO: Pod "downward-api-4afd84ec-100d-48e6-8c3e-93b45aeb71f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019011121s
STEP: Saw pod success
Feb 17 23:53:36.111: INFO: Pod "downward-api-4afd84ec-100d-48e6-8c3e-93b45aeb71f6" satisfied condition "success or failure"
Feb 17 23:53:36.114: INFO: Trying to get logs from node iruya-worker2 pod downward-api-4afd84ec-100d-48e6-8c3e-93b45aeb71f6 container dapi-container: 
STEP: delete the pod
Feb 17 23:53:36.191: INFO: Waiting for pod downward-api-4afd84ec-100d-48e6-8c3e-93b45aeb71f6 to disappear
Feb 17 23:53:36.205: INFO: Pod downward-api-4afd84ec-100d-48e6-8c3e-93b45aeb71f6 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:53:36.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9549" for this suite.
Feb 17 23:53:42.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:53:42.340: INFO: namespace downward-api-9549 deletion completed in 6.132383853s

• [SLOW TEST:10.330 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
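
The Downward API case injects the node's IP into the container environment via a fieldRef; something like the following reproduces it (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
kubectl logs downward-hostip-demo
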
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:53:42.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 17 23:53:42.398: INFO: Waiting up to 5m0s for pod "pod-b20292b8-633a-4b92-9c74-171d57828548" in namespace "emptydir-1822" to be "success or failure"
Feb 17 23:53:42.437: INFO: Pod "pod-b20292b8-633a-4b92-9c74-171d57828548": Phase="Pending", Reason="", readiness=false. Elapsed: 38.695763ms
Feb 17 23:53:44.441: INFO: Pod "pod-b20292b8-633a-4b92-9c74-171d57828548": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042578437s
Feb 17 23:53:46.446: INFO: Pod "pod-b20292b8-633a-4b92-9c74-171d57828548": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047261792s
STEP: Saw pod success
Feb 17 23:53:46.446: INFO: Pod "pod-b20292b8-633a-4b92-9c74-171d57828548" satisfied condition "success or failure"
Feb 17 23:53:46.449: INFO: Trying to get logs from node iruya-worker2 pod pod-b20292b8-633a-4b92-9c74-171d57828548 container test-container: 
STEP: delete the pod
Feb 17 23:53:46.625: INFO: Waiting for pod pod-b20292b8-633a-4b92-9c74-171d57828548 to disappear
Feb 17 23:53:46.655: INFO: Pod pod-b20292b8-633a-4b92-9c74-171d57828548 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:53:46.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1822" for this suite.
Feb 17 23:53:52.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:53:52.812: INFO: namespace emptydir-1822 deletion completed in 6.15246889s

• [SLOW TEST:10.471 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:53:52.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-3916be5c-90ce-4631-8f24-ad7aa8f1eb7f
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:53:52.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7335" for this suite.
Feb 17 23:53:58.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:53:59.010: INFO: namespace configmap-7335 deletion completed in 6.138950394s

• [SLOW TEST:6.197 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
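
This is a pure validation test: the API server rejects a ConfigMap whose data map contains an empty key, so the create in the STEP above is expected to fail. Reproducible by hand; the exact error text varies by version:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: empty-key-demo
data:
  "": "value"   # an empty key is invalid and rejected by apiserver validation
EOF
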
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:53:59.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-6c85028e-f976-4aad-a1a1-594371f9e5f2
STEP: Creating a pod to test consume configMaps
Feb 17 23:53:59.161: INFO: Waiting up to 5m0s for pod "pod-configmaps-9a2f172d-e8fb-4f3f-b9d8-9149e6c823b8" in namespace "configmap-117" to be "success or failure"
Feb 17 23:53:59.171: INFO: Pod "pod-configmaps-9a2f172d-e8fb-4f3f-b9d8-9149e6c823b8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.765827ms
Feb 17 23:54:01.227: INFO: Pod "pod-configmaps-9a2f172d-e8fb-4f3f-b9d8-9149e6c823b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066467392s
Feb 17 23:54:03.231: INFO: Pod "pod-configmaps-9a2f172d-e8fb-4f3f-b9d8-9149e6c823b8": Phase="Running", Reason="", readiness=true. Elapsed: 4.070151464s
Feb 17 23:54:05.235: INFO: Pod "pod-configmaps-9a2f172d-e8fb-4f3f-b9d8-9149e6c823b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074036466s
STEP: Saw pod success
Feb 17 23:54:05.235: INFO: Pod "pod-configmaps-9a2f172d-e8fb-4f3f-b9d8-9149e6c823b8" satisfied condition "success or failure"
Feb 17 23:54:05.238: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-9a2f172d-e8fb-4f3f-b9d8-9149e6c823b8 container configmap-volume-test: 
STEP: delete the pod
Feb 17 23:54:05.261: INFO: Waiting for pod pod-configmaps-9a2f172d-e8fb-4f3f-b9d8-9149e6c823b8 to disappear
Feb 17 23:54:05.265: INFO: Pod pod-configmaps-9a2f172d-e8fb-4f3f-b9d8-9149e6c823b8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:54:05.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-117" for this suite.
Feb 17 23:54:11.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:54:11.380: INFO: namespace configmap-117 deletion completed in 6.111112735s

• [SLOW TEST:12.370 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
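
The "with mappings as non-root" wording above combines two things: an items list that remaps a ConfigMap key to a custom path inside the volume, and a pod-level runAsUser so the file is read by a non-root UID. A minimal sketch; all names and the UID are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-map-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: cm-map-pod
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000          # the "as non-root" part of the test
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-map-demo
      items:                 # the "with mappings" part: key -> custom path
      - key: data-1
        path: path/to/data-1
EOF
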
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:54:11.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 17 23:54:11.519: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8198,SelfLink:/api/v1/namespaces/watch-8198/configmaps/e2e-watch-test-watch-closed,UID:83acbbce-d56c-4002-a950-278a6d458613,ResourceVersion:6947899,Generation:0,CreationTimestamp:2021-02-17 23:54:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 17 23:54:11.519: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8198,SelfLink:/api/v1/namespaces/watch-8198/configmaps/e2e-watch-test-watch-closed,UID:83acbbce-d56c-4002-a950-278a6d458613,ResourceVersion:6947900,Generation:0,CreationTimestamp:2021-02-17 23:54:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 17 23:54:11.543: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8198,SelfLink:/api/v1/namespaces/watch-8198/configmaps/e2e-watch-test-watch-closed,UID:83acbbce-d56c-4002-a950-278a6d458613,ResourceVersion:6947901,Generation:0,CreationTimestamp:2021-02-17 23:54:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 17 23:54:11.544: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8198,SelfLink:/api/v1/namespaces/watch-8198/configmaps/e2e-watch-test-watch-closed,UID:83acbbce-d56c-4002-a950-278a6d458613,ResourceVersion:6947902,Generation:0,CreationTimestamp:2021-02-17 23:54:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:54:11.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8198" for this suite.
Feb 17 23:54:17.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:54:17.652: INFO: namespace watch-8198 deletion completed in 6.104427677s

• [SLOW TEST:6.271 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
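
The watch test closes its first watch after two notifications, then opens a second watch from the last resourceVersion it observed; the API replays every event after that version, which is why the MODIFIED (mutation: 2) and DELETED events above arrive on the new watch. The same behavior is visible through the raw watch endpoint; this sketch assumes a configmap named e2e-demo in the default namespace:

# Capture a resourceVersion, mutate the object, then watch from that version:
RV=$(kubectl get configmap e2e-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl label configmap e2e-demo mutated=yes --overwrite
# Events that happened after $RV are replayed onto the new watch stream:
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"
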
SSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:54:17.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-163, will wait for the garbage collector to delete the pods
Feb 17 23:54:21.784: INFO: Deleting Job.batch foo took: 6.352795ms
Feb 17 23:54:22.084: INFO: Terminating Job.batch foo pods took: 300.247015ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:55:02.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-163" for this suite.
Feb 17 23:55:08.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:55:08.748: INFO: namespace job-163 deletion completed in 6.131442667s

• [SLOW TEST:51.096 seconds]
[sig-apps] Job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
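
The Job case verifies cascading deletion: deleting the Job object leaves its pods to the garbage collector, which removes them via owner references; hence the two log lines timing the deletion and the pod termination. The equivalent by hand (job name and image illustrative; the test also sets parallelism, omitted here):

kubectl create job foo --image=busybox -- sh -c 'sleep 3600'
kubectl delete job foo
# Pods owned by the job disappear once the garbage collector catches up:
kubectl get pods -l job-name=foo
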
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:55:08.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-w6w8
STEP: Creating a pod to test atomic-volume-subpath
Feb 17 23:55:08.902: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-w6w8" in namespace "subpath-5428" to be "success or failure"
Feb 17 23:55:08.943: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Pending", Reason="", readiness=false. Elapsed: 40.544574ms
Feb 17 23:55:10.947: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044296856s
Feb 17 23:55:12.951: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Running", Reason="", readiness=true. Elapsed: 4.049025625s
Feb 17 23:55:15.073: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Running", Reason="", readiness=true. Elapsed: 6.170168198s
Feb 17 23:55:17.077: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Running", Reason="", readiness=true. Elapsed: 8.175019147s
Feb 17 23:55:19.082: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Running", Reason="", readiness=true. Elapsed: 10.179583647s
Feb 17 23:55:21.093: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Running", Reason="", readiness=true. Elapsed: 12.190482979s
Feb 17 23:55:23.098: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Running", Reason="", readiness=true. Elapsed: 14.195894169s
Feb 17 23:55:25.102: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Running", Reason="", readiness=true. Elapsed: 16.199939001s
Feb 17 23:55:27.107: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Running", Reason="", readiness=true. Elapsed: 18.204393114s
Feb 17 23:55:29.111: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Running", Reason="", readiness=true. Elapsed: 20.208535207s
Feb 17 23:55:31.115: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Running", Reason="", readiness=true. Elapsed: 22.212571096s
Feb 17 23:55:33.119: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Running", Reason="", readiness=true. Elapsed: 24.216687238s
Feb 17 23:55:35.123: INFO: Pod "pod-subpath-test-configmap-w6w8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.22103707s
STEP: Saw pod success
Feb 17 23:55:35.124: INFO: Pod "pod-subpath-test-configmap-w6w8" satisfied condition "success or failure"
Feb 17 23:55:35.127: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-w6w8 container test-container-subpath-configmap-w6w8: 
STEP: delete the pod
Feb 17 23:55:35.158: INFO: Waiting for pod pod-subpath-test-configmap-w6w8 to disappear
Feb 17 23:55:35.222: INFO: Pod pod-subpath-test-configmap-w6w8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-w6w8
Feb 17 23:55:35.222: INFO: Deleting pod "pod-subpath-test-configmap-w6w8" in namespace "subpath-5428"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:55:35.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5428" for this suite.
Feb 17 23:55:41.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:55:41.364: INFO: namespace subpath-5428 deletion completed in 6.135808535s

• [SLOW TEST:32.616 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
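
The subPath case mounts a single ConfigMap key over a file that already exists in the image, rather than over a directory. A minimal sketch; /etc/passwd here merely stands in for "some existing file", and the real test uses its own paths and a longer-running container:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo
data:
  replacement: "contents that shadow the original file"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-pod
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "cat /etc/passwd"]
    volumeMounts:
    - name: cm
      mountPath: /etc/passwd   # an existing file in the image...
      subPath: replacement     # ...shadowed by a single key from the volume
  volumes:
  - name: cm
    configMap:
      name: subpath-demo
EOF
kubectl logs subpath-pod   # prints the ConfigMap value, not the image's file
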
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:55:41.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 17 23:55:41.464: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ffc95f72-eabb-4f7a-84db-50d33ba030dd" in namespace "projected-8417" to be "success or failure"
Feb 17 23:55:41.473: INFO: Pod "downwardapi-volume-ffc95f72-eabb-4f7a-84db-50d33ba030dd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057577ms
Feb 17 23:55:43.477: INFO: Pod "downwardapi-volume-ffc95f72-eabb-4f7a-84db-50d33ba030dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012062712s
Feb 17 23:55:45.480: INFO: Pod "downwardapi-volume-ffc95f72-eabb-4f7a-84db-50d33ba030dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015394811s
STEP: Saw pod success
Feb 17 23:55:45.480: INFO: Pod "downwardapi-volume-ffc95f72-eabb-4f7a-84db-50d33ba030dd" satisfied condition "success or failure"
Feb 17 23:55:45.482: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ffc95f72-eabb-4f7a-84db-50d33ba030dd container client-container: 
STEP: delete the pod
Feb 17 23:55:45.504: INFO: Waiting for pod downwardapi-volume-ffc95f72-eabb-4f7a-84db-50d33ba030dd to disappear
Feb 17 23:55:45.508: INFO: Pod downwardapi-volume-ffc95f72-eabb-4f7a-84db-50d33ba030dd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:55:45.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8417" for this suite.
Feb 17 23:55:51.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:55:51.627: INFO: namespace projected-8417 deletion completed in 6.115251293s

• [SLOW TEST:10.263 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
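
Here the pod reads its own memory limit from a projected downwardAPI volume via a resourceFieldRef; with the default divisor the file holds the value in bytes (67108864 for a 64Mi limit). A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs downward-memlimit-demo
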
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:55:51.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-2c8dae84-8ec5-4a67-bf5e-9999bcebed09
STEP: Creating a pod to test consume configMaps
Feb 17 23:55:51.709: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-819bf53c-138b-446f-8a60-2e1c8546fe8b" in namespace "projected-7153" to be "success or failure"
Feb 17 23:55:51.733: INFO: Pod "pod-projected-configmaps-819bf53c-138b-446f-8a60-2e1c8546fe8b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.38ms
Feb 17 23:55:53.737: INFO: Pod "pod-projected-configmaps-819bf53c-138b-446f-8a60-2e1c8546fe8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028382812s
Feb 17 23:55:55.741: INFO: Pod "pod-projected-configmaps-819bf53c-138b-446f-8a60-2e1c8546fe8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032473531s
STEP: Saw pod success
Feb 17 23:55:55.741: INFO: Pod "pod-projected-configmaps-819bf53c-138b-446f-8a60-2e1c8546fe8b" satisfied condition "success or failure"
Feb 17 23:55:55.745: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-819bf53c-138b-446f-8a60-2e1c8546fe8b container projected-configmap-volume-test: 
STEP: delete the pod
Feb 17 23:55:55.765: INFO: Waiting for pod pod-projected-configmaps-819bf53c-138b-446f-8a60-2e1c8546fe8b to disappear
Feb 17 23:55:55.782: INFO: Pod pod-projected-configmaps-819bf53c-138b-446f-8a60-2e1c8546fe8b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:55:55.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7153" for this suite.
Feb 17 23:56:01.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:56:01.889: INFO: namespace projected-7153 deletion completed in 6.104221181s

• [SLOW TEST:10.262 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
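
Same key-to-path mapping as the earlier ConfigMap volume test, but through a projected volume, which can merge several sources (configMaps, secrets, downwardAPI) under one mount point. Illustrative sketch:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: proj-cm-demo
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: proj-cm-pod
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/renamed-data"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:               # same key -> path mapping, via a projected source
      sources:
      - configMap:
          name: proj-cm-demo
          items:
          - key: data-1
            path: renamed-data
EOF
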
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:56:01.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 17 23:56:01.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1798'
Feb 17 23:56:02.040: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 17 23:56:02.041: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb 17 23:56:02.059: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb 17 23:56:02.066: INFO: scanned /root for discovery docs: 
Feb 17 23:56:02.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1798'
Feb 17 23:56:17.996: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 17 23:56:17.997: INFO: stdout: "Created e2e-test-nginx-rc-29d33d1002dc2eb9b0e4e9b9137040ff\nScaling up e2e-test-nginx-rc-29d33d1002dc2eb9b0e4e9b9137040ff from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-29d33d1002dc2eb9b0e4e9b9137040ff up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-29d33d1002dc2eb9b0e4e9b9137040ff to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb 17 23:56:17.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1798'
Feb 17 23:56:18.089: INFO: stderr: ""
Feb 17 23:56:18.090: INFO: stdout: "e2e-test-nginx-rc-29d33d1002dc2eb9b0e4e9b9137040ff-kmxw9 e2e-test-nginx-rc-n5xfk "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb 17 23:56:23.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1798'
Feb 17 23:56:23.191: INFO: stderr: ""
Feb 17 23:56:23.191: INFO: stdout: "e2e-test-nginx-rc-29d33d1002dc2eb9b0e4e9b9137040ff-kmxw9 "
Feb 17 23:56:23.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-29d33d1002dc2eb9b0e4e9b9137040ff-kmxw9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1798'
Feb 17 23:56:23.283: INFO: stderr: ""
Feb 17 23:56:23.283: INFO: stdout: "true"
Feb 17 23:56:23.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-29d33d1002dc2eb9b0e4e9b9137040ff-kmxw9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1798'
Feb 17 23:56:23.379: INFO: stderr: ""
Feb 17 23:56:23.379: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb 17 23:56:23.379: INFO: e2e-test-nginx-rc-29d33d1002dc2eb9b0e4e9b9137040ff-kmxw9 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb 17 23:56:23.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1798'
Feb 17 23:56:23.497: INFO: stderr: ""
Feb 17 23:56:23.497: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:56:23.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1798" for this suite.
Feb 17 23:56:45.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:56:45.688: INFO: namespace kubectl-1798 deletion completed in 22.182412231s

• [SLOW TEST:43.798 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
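Note on the run above: kubectl warns twice here, once that --generator=run/v1 is deprecated and once that rolling-update is deprecated in favor of rollout, yet both still function on this v1.15 cluster. The same flow, first as the test drives it and then in the post-removal Deployment form (resource names are illustrative):

    # As exercised on v1.15 (rolling-update was later removed entirely):
    kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
    kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
      --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
    # Modern equivalent with a Deployment:
    kubectl create deployment nginx --image=docker.io/library/nginx:1.14-alpine
    kubectl set image deployment/nginx nginx=docker.io/library/nginx:1.14-alpine
    kubectl rollout status deployment/nginx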
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:56:45.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 17 23:56:52.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-d2981f7d-acff-48fd-aa25-ad8498af316d -c busybox-main-container --namespace=emptydir-7916 -- cat /usr/share/volumeshare/shareddata.txt'
Feb 17 23:56:52.263: INFO: stderr: ""
Feb 17 23:56:52.263: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:56:52.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7916" for this suite.
Feb 17 23:56:58.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:56:58.400: INFO: namespace emptydir-7916 deletion completed in 6.132198164s

• [SLOW TEST:12.711 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
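Note on the run above: the shared-volume check works because both containers mount the same emptyDir; the sub-container writes a file and the main container reads it back over kubectl exec. A self-contained sketch of the same shape (names and paths illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-volume-demo
    spec:
      volumes:
      - name: share
        emptyDir: {}
      containers:
      - name: writer
        image: busybox
        command: ["sh", "-c", "echo 'Hello from the writer' > /share/data.txt && sleep 3600"]
        volumeMounts:
        - name: share
          mountPath: /share
      - name: reader
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: share
          mountPath: /share
    EOF
    # Read the file back through the other container, as the test does:
    kubectl exec shared-volume-demo -c reader -- cat /share/data.txt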
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:56:58.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb 17 23:56:58.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 17 23:56:59.211: INFO: stderr: ""
Feb 17 23:56:59.211: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npingcap.com/v1alpha1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:56:59.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4845" for this suite.
Feb 17 23:57:05.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:57:05.334: INFO: namespace kubectl-4845 deletion completed in 6.116074585s

• [SLOW TEST:6.934 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
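Note on the run above: the assertion reduces to "the core group is served". Incidentally, the pingcap.com/v1alpha1 entry in the output comes from a CRD installed on this cluster, not from Kubernetes itself. The same check by hand:

    kubectl api-versions | grep -qx v1 && echo "v1 is available"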
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:57:05.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 17 23:57:09.970: INFO: Successfully updated pod "annotationupdate271c2967-f634-4ff9-b81a-0f21c28d0d32"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:57:11.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-138" for this suite.
Feb 17 23:57:34.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:57:34.100: INFO: namespace downward-api-138 deletion completed in 22.105571599s

• [SLOW TEST:28.766 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
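Note on the run above: the test projects pod annotations through a downwardAPI volume, updates them, and waits for the kubelet to rewrite the projected file. A hand-run sketch (names and annotation values illustrative; the file refresh is eventual, on the kubelet sync period):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotation-demo
      annotations:
        build: "one"
    spec:
      containers:
      - name: watcher
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
    EOF
    kubectl annotate pod annotation-demo build=two --overwrite
    # shortly afterwards /etc/podinfo/annotations reflects build="two"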
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:57:34.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb 17 23:57:34.171: INFO: Waiting up to 5m0s for pod "client-containers-b5f7fbe6-a6b2-4fed-b1d5-b2bd35aa9b0d" in namespace "containers-3122" to be "success or failure"
Feb 17 23:57:34.192: INFO: Pod "client-containers-b5f7fbe6-a6b2-4fed-b1d5-b2bd35aa9b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.417595ms
Feb 17 23:57:36.197: INFO: Pod "client-containers-b5f7fbe6-a6b2-4fed-b1d5-b2bd35aa9b0d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025496111s
Feb 17 23:57:38.271: INFO: Pod "client-containers-b5f7fbe6-a6b2-4fed-b1d5-b2bd35aa9b0d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100007466s
STEP: Saw pod success
Feb 17 23:57:38.271: INFO: Pod "client-containers-b5f7fbe6-a6b2-4fed-b1d5-b2bd35aa9b0d" satisfied condition "success or failure"
Feb 17 23:57:38.274: INFO: Trying to get logs from node iruya-worker2 pod client-containers-b5f7fbe6-a6b2-4fed-b1d5-b2bd35aa9b0d container test-container: 
STEP: delete the pod
Feb 17 23:57:38.321: INFO: Waiting for pod client-containers-b5f7fbe6-a6b2-4fed-b1d5-b2bd35aa9b0d to disappear
Feb 17 23:57:38.325: INFO: Pod client-containers-b5f7fbe6-a6b2-4fed-b1d5-b2bd35aa9b0d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:57:38.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3122" for this suite.
Feb 17 23:57:44.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:57:44.428: INFO: namespace containers-3122 deletion completed in 6.098259221s

• [SLOW TEST:10.327 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
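Note on the run above: in a pod spec, command replaces the image's ENTRYPOINT and args replaces its CMD; "override all" sets both. A minimal sketch (names illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: override-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["echo"]                  # replaces ENTRYPOINT
        args: ["overridden", "arguments"]  # replaces CMD
    EOF
    kubectl logs override-demo   # once it completes: overridden arguments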
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:57:44.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-b88b4799-14bf-415b-820f-33c193e3deff
STEP: Creating a pod to test consume configMaps
Feb 17 23:57:44.531: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-86cbb7b8-7dca-456d-9da1-558a5695964c" in namespace "projected-4423" to be "success or failure"
Feb 17 23:57:44.559: INFO: Pod "pod-projected-configmaps-86cbb7b8-7dca-456d-9da1-558a5695964c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.979604ms
Feb 17 23:57:46.600: INFO: Pod "pod-projected-configmaps-86cbb7b8-7dca-456d-9da1-558a5695964c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068799179s
Feb 17 23:57:48.604: INFO: Pod "pod-projected-configmaps-86cbb7b8-7dca-456d-9da1-558a5695964c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072600742s
STEP: Saw pod success
Feb 17 23:57:48.604: INFO: Pod "pod-projected-configmaps-86cbb7b8-7dca-456d-9da1-558a5695964c" satisfied condition "success or failure"
Feb 17 23:57:48.607: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-86cbb7b8-7dca-456d-9da1-558a5695964c container projected-configmap-volume-test: 
STEP: delete the pod
Feb 17 23:57:48.627: INFO: Waiting for pod pod-projected-configmaps-86cbb7b8-7dca-456d-9da1-558a5695964c to disappear
Feb 17 23:57:48.631: INFO: Pod pod-projected-configmaps-86cbb7b8-7dca-456d-9da1-558a5695964c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:57:48.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4423" for this suite.
Feb 17 23:57:54.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:57:54.737: INFO: namespace projected-4423 deletion completed in 6.103254918s

• [SLOW TEST:10.310 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:57:54.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb 17 23:57:54.854: INFO: Waiting up to 5m0s for pod "client-containers-c9d17d9d-94b2-486e-b3d9-bf0d1a43716b" in namespace "containers-650" to be "success or failure"
Feb 17 23:57:54.857: INFO: Pod "client-containers-c9d17d9d-94b2-486e-b3d9-bf0d1a43716b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.167821ms
Feb 17 23:57:56.861: INFO: Pod "client-containers-c9d17d9d-94b2-486e-b3d9-bf0d1a43716b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007259487s
Feb 17 23:57:58.865: INFO: Pod "client-containers-c9d17d9d-94b2-486e-b3d9-bf0d1a43716b": Phase="Running", Reason="", readiness=true. Elapsed: 4.011587688s
Feb 17 23:58:00.869: INFO: Pod "client-containers-c9d17d9d-94b2-486e-b3d9-bf0d1a43716b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015871848s
STEP: Saw pod success
Feb 17 23:58:00.870: INFO: Pod "client-containers-c9d17d9d-94b2-486e-b3d9-bf0d1a43716b" satisfied condition "success or failure"
Feb 17 23:58:00.873: INFO: Trying to get logs from node iruya-worker pod client-containers-c9d17d9d-94b2-486e-b3d9-bf0d1a43716b container test-container: 
STEP: delete the pod
Feb 17 23:58:00.895: INFO: Waiting for pod client-containers-c9d17d9d-94b2-486e-b3d9-bf0d1a43716b to disappear
Feb 17 23:58:00.897: INFO: Pod client-containers-c9d17d9d-94b2-486e-b3d9-bf0d1a43716b no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:58:00.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-650" for this suite.
Feb 17 23:58:06.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:58:07.003: INFO: namespace containers-650 deletion completed in 6.102749121s

• [SLOW TEST:12.265 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:58:07.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb 17 23:58:07.103: INFO: Waiting up to 5m0s for pod "var-expansion-4670ad4c-56d1-42cf-936e-dcacdb1495ba" in namespace "var-expansion-1356" to be "success or failure"
Feb 17 23:58:07.107: INFO: Pod "var-expansion-4670ad4c-56d1-42cf-936e-dcacdb1495ba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.990649ms
Feb 17 23:58:09.130: INFO: Pod "var-expansion-4670ad4c-56d1-42cf-936e-dcacdb1495ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026433271s
Feb 17 23:58:11.134: INFO: Pod "var-expansion-4670ad4c-56d1-42cf-936e-dcacdb1495ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030639186s
STEP: Saw pod success
Feb 17 23:58:11.134: INFO: Pod "var-expansion-4670ad4c-56d1-42cf-936e-dcacdb1495ba" satisfied condition "success or failure"
Feb 17 23:58:11.137: INFO: Trying to get logs from node iruya-worker pod var-expansion-4670ad4c-56d1-42cf-936e-dcacdb1495ba container dapi-container: 
STEP: delete the pod
Feb 17 23:58:11.266: INFO: Waiting for pod var-expansion-4670ad4c-56d1-42cf-936e-dcacdb1495ba to disappear
Feb 17 23:58:11.563: INFO: Pod var-expansion-4670ad4c-56d1-42cf-936e-dcacdb1495ba no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:58:11.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1356" for this suite.
Feb 17 23:58:17.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:58:17.718: INFO: namespace var-expansion-1356 deletion completed in 6.151712316s

• [SLOW TEST:10.715 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
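Note on the run above: $(VAR) references in env values are expanded by Kubernetes from variables defined earlier in the same env list, which is what "composing env vars" means here. A minimal sketch (names illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: env-composition-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "echo $COMPOSED"]
        env:
        - name: FOO
          value: "foo-value"
        - name: COMPOSED
          value: "prefix-$(FOO)-suffix"   # $(FOO) must be declared earlier in the list
    EOF
    kubectl logs env-composition-demo   # once it completes: prefix-foo-value-suffix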
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:58:17.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb 17 23:58:17.890: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8958" to be "success or failure"
Feb 17 23:58:17.902: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 11.460366ms
Feb 17 23:58:19.906: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01568027s
Feb 17 23:58:21.924: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033562359s
Feb 17 23:58:23.931: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040965144s
STEP: Saw pod success
Feb 17 23:58:23.931: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb 17 23:58:23.933: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb 17 23:58:23.986: INFO: Waiting for pod pod-host-path-test to disappear
Feb 17 23:58:24.079: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:58:24.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8958" for this suite.
Feb 17 23:58:30.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:58:30.326: INFO: namespace hostpath-8958 deletion completed in 6.242440412s

• [SLOW TEST:12.607 seconds]
[sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
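Note on the run above: the test mounts a hostPath volume and asserts on the mode bits the kubelet gives the directory. A hand-run approximation (path and names illustrative; busybox stat prints the octal mode with -c '%a'):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostpath-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "stat -c '%a' /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        hostPath:
          path: /tmp/hostpath-demo
          type: DirectoryOrCreate
    EOF
    kubectl logs hostpath-mode-demo   # the e2e check expects a world-writable 777 here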
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:58:30.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-a4e922b1-01ec-404f-a4a3-2eebd23294aa
Feb 17 23:58:30.454: INFO: Pod name my-hostname-basic-a4e922b1-01ec-404f-a4a3-2eebd23294aa: Found 0 pods out of 1
Feb 17 23:58:35.459: INFO: Pod name my-hostname-basic-a4e922b1-01ec-404f-a4a3-2eebd23294aa: Found 1 pods out of 1
Feb 17 23:58:35.459: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-a4e922b1-01ec-404f-a4a3-2eebd23294aa" are running
Feb 17 23:58:35.463: INFO: Pod "my-hostname-basic-a4e922b1-01ec-404f-a4a3-2eebd23294aa-9xdn2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-17 23:58:30 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-17 23:58:33 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-17 23:58:33 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-17 23:58:30 +0000 UTC Reason: Message:}])
Feb 17 23:58:35.463: INFO: Trying to dial the pod
Feb 17 23:58:40.474: INFO: Controller my-hostname-basic-a4e922b1-01ec-404f-a4a3-2eebd23294aa: Got expected result from replica 1 [my-hostname-basic-a4e922b1-01ec-404f-a4a3-2eebd23294aa-9xdn2]: "my-hostname-basic-a4e922b1-01ec-404f-a4a3-2eebd23294aa-9xdn2", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:58:40.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8568" for this suite.
Feb 17 23:58:46.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:58:46.584: INFO: namespace replication-controller-8568 deletion completed in 6.107281914s

• [SLOW TEST:16.258 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
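Note on the run above: the controller serves each replica's hostname over HTTP and the test dials every replica expecting its own pod name back, which is why the "expected result" is the pod name itself. A sketch of the controller shape (the serve-hostname image tag is what this e2e vintage typically used; treat it and all names as illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: hostname-demo
    spec:
      replicas: 1
      selector:
        name: hostname-demo
      template:
        metadata:
          labels:
            name: hostname-demo
        spec:
          containers:
          - name: hostname-demo
            image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
            ports:
            - containerPort: 9376
    EOF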
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:58:46.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1549
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 17 23:58:46.697: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 17 23:59:10.867: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.170 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1549 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 23:59:10.868: INFO: >>> kubeConfig: /root/.kube/config
Feb 17 23:59:12.024: INFO: Found all expected endpoints: [netserver-0]
Feb 17 23:59:12.038: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.67 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1549 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 17 23:59:12.038: INFO: >>> kubeConfig: /root/.kube/config
Feb 17 23:59:13.142: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:59:13.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1549" for this suite.
Feb 17 23:59:35.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 17 23:59:35.269: INFO: namespace pod-network-test-1549 deletion completed in 22.122228438s

• [SLOW TEST:48.685 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
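Note on the run above: the UDP probe the framework executes is visible in the ExecWithOptions lines; from the hostexec helper pod it sends "hostName" to each netserver pod IP on port 8081 and expects a non-empty reply. Reconstructed as a single command, using the first endpoint from this run (the namespace no longer exists once the suite tears down):

    kubectl exec host-test-container-pod -n pod-network-test-1549 -- \
      /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.170 8081 | grep -v '^\s*$'"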
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 17 23:59:35.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-feb87ddc-9de3-45c0-ba8b-3ae6e3074d16
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 17 23:59:41.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7562" for this suite.
Feb 18 00:00:03.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:00:03.516: INFO: namespace configmap-7562 deletion completed in 22.104734293s

• [SLOW TEST:28.246 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
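Note on the run above: non-UTF-8 content lands in a ConfigMap's binaryData field (base64-encoded) rather than data, and the test checks that both kinds survive the round trip through a volume. A quick hand check (file name and bytes illustrative):

    printf '\xDE\xAD\xBE\xEF' > /tmp/blob.bin
    kubectl create configmap binary-demo --from-file=blob=/tmp/blob.bin
    kubectl get configmap binary-demo -o jsonpath='{.binaryData.blob}'   # 3q2+7w==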
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:00:03.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:00:29.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8128" for this suite.
Feb 18 00:00:35.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:00:35.920: INFO: namespace namespaces-8128 deletion completed in 6.11068411s
STEP: Destroying namespace "nsdeletetest-4027" for this suite.
Feb 18 00:00:35.922: INFO: Namespace nsdeletetest-4027 was already deleted
STEP: Destroying namespace "nsdeletetest-9105" for this suite.
Feb 18 00:00:41.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:00:42.045: INFO: namespace nsdeletetest-9105 deletion completed in 6.122734326s

• [SLOW TEST:38.529 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
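Note on the run above: deleting a namespace cascades to everything in it, and the delete only completes once the contents are gone; recreating the namespace must then yield an empty one. The same lifecycle by hand (names illustrative):

    kubectl create namespace scratch
    kubectl run sleeper --image=busybox -n scratch --restart=Never -- sleep 3600
    kubectl delete namespace scratch   # waits for the namespace and its pods to go away
    kubectl create namespace scratch
    kubectl get pods -n scratch        # No resources found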
SSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:00:42.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb 18 00:00:46.521: INFO: Pod pod-hostip-da07e396-9aec-4201-b25e-fee0612a252b has hostIP: 172.18.0.7
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:00:46.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2583" for this suite.
Feb 18 00:01:10.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:01:10.624: INFO: namespace pods-2583 deletion completed in 24.0996757s

• [SLOW TEST:28.579 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
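Note on the run above: the asserted value is simply status.hostIP, here the kind node's address. It can be read back directly:

    kubectl get pod pod-hostip-da07e396-9aec-4201-b25e-fee0612a252b -n pods-2583 \
      -o jsonpath='{.status.hostIP}'   # 172.18.0.7 in this run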
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:01:10.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 00:01:10.932: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"b970c575-f0f6-474a-a43f-c98f2c331874", Controller:(*bool)(0xc001ba727a), BlockOwnerDeletion:(*bool)(0xc001ba727b)}}
Feb 18 00:01:10.937: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"00a6c7bf-05a3-4868-acf8-2ccb57386cce", Controller:(*bool)(0xc002840ada), BlockOwnerDeletion:(*bool)(0xc002840adb)}}
Feb 18 00:01:10.961: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"3d12f456-a133-46aa-9030-0614cc213e05", Controller:(*bool)(0xc001cd512a), BlockOwnerDeletion:(*bool)(0xc001cd512b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:01:15.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6197" for this suite.
Feb 18 00:01:22.018: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:01:22.115: INFO: namespace gc-6197 deletion completed in 6.131794545s

• [SLOW TEST:11.490 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
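Note on the run above: the three pods are wired into an ownership cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), and the pass criterion is that the garbage collector still makes progress instead of deadlocking on the circle. The links live in metadata and can be inspected while the pods exist:

    kubectl get pod pod1 -n gc-6197 -o jsonpath='{.metadata.ownerReferences}'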
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:01:22.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 18 00:01:22.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4036'
Feb 18 00:01:22.478: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 18 00:01:22.478: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb 18 00:01:22.490: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-pw67v]
Feb 18 00:01:22.491: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-pw67v" in namespace "kubectl-4036" to be "running and ready"
Feb 18 00:01:22.590: INFO: Pod "e2e-test-nginx-rc-pw67v": Phase="Pending", Reason="", readiness=false. Elapsed: 99.120582ms
Feb 18 00:01:24.594: INFO: Pod "e2e-test-nginx-rc-pw67v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102849276s
Feb 18 00:01:26.597: INFO: Pod "e2e-test-nginx-rc-pw67v": Phase="Running", Reason="", readiness=true. Elapsed: 4.106533075s
Feb 18 00:01:26.597: INFO: Pod "e2e-test-nginx-rc-pw67v" satisfied condition "running and ready"
Feb 18 00:01:26.597: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-pw67v]
Feb 18 00:01:26.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-4036'
Feb 18 00:01:26.713: INFO: stderr: ""
Feb 18 00:01:26.713: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb 18 00:01:26.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4036'
Feb 18 00:01:26.820: INFO: stderr: ""
Feb 18 00:01:26.820: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:01:26.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4036" for this suite.
Feb 18 00:01:48.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:01:48.946: INFO: namespace kubectl-4036 deletion completed in 22.109965345s

• [SLOW TEST:26.831 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:01:48.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 18 00:01:53.541: INFO: Successfully updated pod "pod-update-002208c5-c4f4-4a5c-bb0b-ad20cb738030"
STEP: verifying the updated pod is in kubernetes
Feb 18 00:01:53.587: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:01:53.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-727" for this suite.
Feb 18 00:02:15.606: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:02:15.743: INFO: namespace pods-727 deletion completed in 22.15260462s

• [SLOW TEST:26.796 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:02:15.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:02:19.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-857" for this suite.
Feb 18 00:03:01.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:03:02.064: INFO: namespace kubelet-test-857 deletion completed in 42.133285802s

• [SLOW TEST:46.321 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
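
The read-only check above is driven by the container securityContext. A minimal pod sketch consistent with this test follows; the pod name, image, and command are assumptions, since the log does not show the manifest.

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly-fs          # placeholder name
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["/bin/sh", "-c", "echo test > /file; sleep 240"]
        securityContext:
          readOnlyRootFilesystem: true   # the write to / should fail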
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:03:02.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 00:03:02.127: INFO: Waiting up to 5m0s for pod "downwardapi-volume-285ce4c9-4fe5-433d-9c5d-949548d08fd4" in namespace "downward-api-1628" to be "success or failure"
Feb 18 00:03:02.144: INFO: Pod "downwardapi-volume-285ce4c9-4fe5-433d-9c5d-949548d08fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 17.530345ms
Feb 18 00:03:04.165: INFO: Pod "downwardapi-volume-285ce4c9-4fe5-433d-9c5d-949548d08fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038586691s
Feb 18 00:03:06.169: INFO: Pod "downwardapi-volume-285ce4c9-4fe5-433d-9c5d-949548d08fd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042248382s
STEP: Saw pod success
Feb 18 00:03:06.169: INFO: Pod "downwardapi-volume-285ce4c9-4fe5-433d-9c5d-949548d08fd4" satisfied condition "success or failure"
Feb 18 00:03:06.171: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-285ce4c9-4fe5-433d-9c5d-949548d08fd4 container client-container: 
STEP: delete the pod
Feb 18 00:03:06.210: INFO: Waiting for pod downwardapi-volume-285ce4c9-4fe5-433d-9c5d-949548d08fd4 to disappear
Feb 18 00:03:06.249: INFO: Pod downwardapi-volume-285ce4c9-4fe5-433d-9c5d-949548d08fd4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:03:06.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1628" for this suite.
Feb 18 00:03:12.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:03:12.392: INFO: namespace downward-api-1628 deletion completed in 6.139386886s

• [SLOW TEST:10.328 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
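
The "downward API volume plugin" referenced above projects container resource fields into files. A minimal sketch of a volume exposing the cpu limit; the pod name, image, mount path, and limit value are assumptions, while the container name client-container comes from the log.

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-example    # placeholder name
    spec:
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: "500m"                   # assumed value
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: "cpu_limit"
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu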
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:03:12.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:03:12.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2608" for this suite.
Feb 18 00:03:18.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:03:18.597: INFO: namespace services-2608 deletion completed in 6.126597328s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.204 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
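
No pod activity appears above because this test only inspects the built-in kubernetes service; roughly, it verifies the API server is exposed over HTTPS through a ClusterIP. The same inspection by hand (the IP and age shown are illustrative only):

    kubectl get service kubernetes --namespace default
    # NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    # kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   12d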
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:03:18.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0218 00:03:28.698414       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 00:03:28.698: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:03:28.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8928" for this suite.
Feb 18 00:03:34.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:03:34.798: INFO: namespace gc-8928 deletion completed in 6.096266594s

• [SLOW TEST:16.201 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
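
"Not orphaning" means the delete cascades: the RC's pods carry ownerReferences back to it, so once the RC is gone the garbage collector reaps them, which is what the "wait for all pods to be garbage collected" step observes. A sketch of the equivalent operation; the RC name is a placeholder.

    # deleting without an orphan policy lets the garbage collector
    # remove the dependent pods, as observed above
    kubectl delete rc <rc-name> --namespace gc-8928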
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:03:34.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 00:04:03.079: INFO: Container started at 2021-02-18 00:03:38 +0000 UTC, pod became ready at 2021-02-18 00:04:01 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:04:03.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4592" for this suite.
Feb 18 00:04:27.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:04:27.232: INFO: namespace container-probe-4592 deletion completed in 24.149549957s

• [SLOW TEST:52.433 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
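
The gap logged above (container started 00:03:38, Ready at 00:04:01) is the probe's initial delay plus probe cadence. A minimal sketch; the delay, period, image, and probe command are assumptions consistent with that gap.

    apiVersion: v1
    kind: Pod
    metadata:
      name: readiness-delay-example    # placeholder name
    spec:
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "sleep 600"]
        readinessProbe:
          exec:
            command: ["true"]          # succeeds whenever it runs
          initialDelaySeconds: 20      # pod must not report Ready before this
          periodSeconds: 5
    # the test also expects restartCount to stay 0: readiness
    # failures gate traffic, they never restart the container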
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:04:27.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-450a2bbb-42d4-466d-a4cd-41cfc3c634f5
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-450a2bbb-42d4-466d-a4cd-41cfc3c634f5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:04:33.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1551" for this suite.
Feb 18 00:04:57.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:04:57.460: INFO: namespace projected-1551 deletion completed in 24.108565945s

• [SLOW TEST:30.228 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
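
The "waiting to observe update in volume" step relies on the kubelet refreshing projected volumes after the source ConfigMap changes. A sketch of the consuming pod; the mount path, command, and data key are assumptions, while the ConfigMap name is taken from the log.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps-example    # placeholder name
    spec:
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"]
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: projected-configmap-test-upd-450a2bbb-42d4-466d-a4cd-41cfc3c634f5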
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:04:57.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6171.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6171.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6171.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6171.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 00:05:05.602: INFO: DNS probes using dns-test-0cf88509-40b0-4bb3-8e1a-8c6d62a58040 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6171.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6171.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6171.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6171.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 00:05:15.737: INFO: File wheezy_udp@dns-test-service-3.dns-6171.svc.cluster.local from pod  dns-6171/dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb 18 00:05:15.741: INFO: File jessie_udp@dns-test-service-3.dns-6171.svc.cluster.local from pod  dns-6171/dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb 18 00:05:15.741: INFO: Lookups using dns-6171/dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 failed for: [wheezy_udp@dns-test-service-3.dns-6171.svc.cluster.local jessie_udp@dns-test-service-3.dns-6171.svc.cluster.local]

Feb 18 00:05:20.746: INFO: File wheezy_udp@dns-test-service-3.dns-6171.svc.cluster.local from pod  dns-6171/dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb 18 00:05:20.750: INFO: File jessie_udp@dns-test-service-3.dns-6171.svc.cluster.local from pod  dns-6171/dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb 18 00:05:20.751: INFO: Lookups using dns-6171/dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 failed for: [wheezy_udp@dns-test-service-3.dns-6171.svc.cluster.local jessie_udp@dns-test-service-3.dns-6171.svc.cluster.local]

Feb 18 00:05:25.746: INFO: File wheezy_udp@dns-test-service-3.dns-6171.svc.cluster.local from pod  dns-6171/dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb 18 00:05:25.750: INFO: File jessie_udp@dns-test-service-3.dns-6171.svc.cluster.local from pod  dns-6171/dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb 18 00:05:25.750: INFO: Lookups using dns-6171/dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 failed for: [wheezy_udp@dns-test-service-3.dns-6171.svc.cluster.local jessie_udp@dns-test-service-3.dns-6171.svc.cluster.local]

Feb 18 00:05:30.746: INFO: File wheezy_udp@dns-test-service-3.dns-6171.svc.cluster.local from pod  dns-6171/dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb 18 00:05:30.750: INFO: File jessie_udp@dns-test-service-3.dns-6171.svc.cluster.local from pod  dns-6171/dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 contains 'foo.example.com.' instead of 'bar.example.com.'
Feb 18 00:05:30.750: INFO: Lookups using dns-6171/dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 failed for: [wheezy_udp@dns-test-service-3.dns-6171.svc.cluster.local jessie_udp@dns-test-service-3.dns-6171.svc.cluster.local]

Feb 18 00:05:35.749: INFO: DNS probes using dns-test-94a88580-47df-4e0c-b0fb-a8e198133fe3 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6171.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6171.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6171.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6171.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 00:05:44.475: INFO: DNS probes using dns-test-86f1aabe-4afa-4920-a28f-cc59075d33c3 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:05:44.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6171" for this suite.
Feb 18 00:05:50.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:05:50.695: INFO: namespace dns-6171 deletion completed in 6.098452928s

• [SLOW TEST:53.235 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
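
The three probe rounds above correspond to three states of one Service: ExternalName pointing at foo.example.com, ExternalName repointed at bar.example.com (the retry loop shows DNS caches catching up), and finally type ClusterIP, which is why the dig queries switch from CNAME to A records. The first state, reconstructed from the names in the log:

    apiVersion: v1
    kind: Service
    metadata:
      name: dns-test-service-3
      namespace: dns-6171
    spec:
      type: ExternalName
      externalName: foo.example.com
    # later changed to externalName: bar.example.com, then to type: ClusterIP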
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:05:50.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-70793a4e-0268-492a-83e0-e66b7e59f971
STEP: Creating a pod to test consume secrets
Feb 18 00:05:50.814: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ad7c63ad-c77d-41bf-9d98-b072b68d9398" in namespace "projected-6512" to be "success or failure"
Feb 18 00:05:50.824: INFO: Pod "pod-projected-secrets-ad7c63ad-c77d-41bf-9d98-b072b68d9398": Phase="Pending", Reason="", readiness=false. Elapsed: 10.034956ms
Feb 18 00:05:52.847: INFO: Pod "pod-projected-secrets-ad7c63ad-c77d-41bf-9d98-b072b68d9398": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033607769s
Feb 18 00:05:54.851: INFO: Pod "pod-projected-secrets-ad7c63ad-c77d-41bf-9d98-b072b68d9398": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037541753s
STEP: Saw pod success
Feb 18 00:05:54.851: INFO: Pod "pod-projected-secrets-ad7c63ad-c77d-41bf-9d98-b072b68d9398" satisfied condition "success or failure"
Feb 18 00:05:54.854: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-ad7c63ad-c77d-41bf-9d98-b072b68d9398 container projected-secret-volume-test: 
STEP: delete the pod
Feb 18 00:05:54.891: INFO: Waiting for pod pod-projected-secrets-ad7c63ad-c77d-41bf-9d98-b072b68d9398 to disappear
Feb 18 00:05:54.901: INFO: Pod pod-projected-secrets-ad7c63ad-c77d-41bf-9d98-b072b68d9398 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:05:54.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6512" for this suite.
Feb 18 00:06:00.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:06:01.035: INFO: namespace projected-6512 deletion completed in 6.131784257s

• [SLOW TEST:10.340 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
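
The consuming pod mounts the secret through a projected volume, mirroring the projected ConfigMap case earlier in this run. The mount path and command are assumptions; the secret and container names are taken from the log.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-secrets-example    # placeholder name
    spec:
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/projected-secret-volume/*"]
        volumeMounts:
        - name: projected-secret-volume
          mountPath: /etc/projected-secret-volume
          readOnly: true
      volumes:
      - name: projected-secret-volume
        projected:
          sources:
          - secret:
              name: projected-secret-test-70793a4e-0268-492a-83e0-e66b7e59f971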
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:06:01.036: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-8582/configmap-test-5145d778-6508-41c3-913e-009fba5a5903
STEP: Creating a pod to test consume configMaps
Feb 18 00:06:01.136: INFO: Waiting up to 5m0s for pod "pod-configmaps-fcca451a-049b-441a-bd02-5a37058cb19d" in namespace "configmap-8582" to be "success or failure"
Feb 18 00:06:01.167: INFO: Pod "pod-configmaps-fcca451a-049b-441a-bd02-5a37058cb19d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.499592ms
Feb 18 00:06:03.171: INFO: Pod "pod-configmaps-fcca451a-049b-441a-bd02-5a37058cb19d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034342579s
Feb 18 00:06:05.178: INFO: Pod "pod-configmaps-fcca451a-049b-441a-bd02-5a37058cb19d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041733064s
STEP: Saw pod success
Feb 18 00:06:05.178: INFO: Pod "pod-configmaps-fcca451a-049b-441a-bd02-5a37058cb19d" satisfied condition "success or failure"
Feb 18 00:06:05.181: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-fcca451a-049b-441a-bd02-5a37058cb19d container env-test: 
STEP: delete the pod
Feb 18 00:06:05.241: INFO: Waiting for pod pod-configmaps-fcca451a-049b-441a-bd02-5a37058cb19d to disappear
Feb 18 00:06:05.247: INFO: Pod pod-configmaps-fcca451a-049b-441a-bd02-5a37058cb19d no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:06:05.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8582" for this suite.
Feb 18 00:06:11.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:06:11.415: INFO: namespace configmap-8582 deletion completed in 6.160665022s

• [SLOW TEST:10.380 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
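
"Consumable via the environment" maps to env vars populated with configMapKeyRef. A sketch; the variable and key names are assumptions, while the ConfigMap and container names come from the log.

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-example    # placeholder name
      namespace: configmap-8582
    spec:
      containers:
      - name: env-test
        image: busybox
        command: ["sh", "-c", "env"]
        env:
        - name: CONFIG_DATA_1         # assumed variable name
          valueFrom:
            configMapKeyRef:
              name: configmap-test-5145d778-6508-41c3-913e-009fba5a5903
              key: data-1             # assumed key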
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:06:11.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-3449
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3449 to expose endpoints map[]
Feb 18 00:06:11.580: INFO: Get endpoints failed (49.257094ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Feb 18 00:06:12.584: INFO: successfully validated that service endpoint-test2 in namespace services-3449 exposes endpoints map[] (1.053496424s elapsed)
STEP: Creating pod pod1 in namespace services-3449
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3449 to expose endpoints map[pod1:[80]]
Feb 18 00:06:16.864: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.272838237s elapsed, will retry)
Feb 18 00:06:17.870: INFO: successfully validated that service endpoint-test2 in namespace services-3449 exposes endpoints map[pod1:[80]] (5.278779191s elapsed)
STEP: Creating pod pod2 in namespace services-3449
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3449 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 18 00:06:22.457: INFO: successfully validated that service endpoint-test2 in namespace services-3449 exposes endpoints map[pod1:[80] pod2:[80]] (4.583517734s elapsed)
STEP: Deleting pod pod1 in namespace services-3449
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3449 to expose endpoints map[pod2:[80]]
Feb 18 00:06:23.488: INFO: successfully validated that service endpoint-test2 in namespace services-3449 exposes endpoints map[pod2:[80]] (1.02596917s elapsed)
STEP: Deleting pod pod2 in namespace services-3449
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3449 to expose endpoints map[]
Feb 18 00:06:24.510: INFO: successfully validated that service endpoint-test2 in namespace services-3449 exposes endpoints map[] (1.008127062s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:06:24.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3449" for this suite.
Feb 18 00:06:46.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:06:46.931: INFO: namespace services-3449 deletion completed in 22.210243278s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:35.516 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
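
The endpoint maps logged above (map[], map[pod1:[80]], and so on) track which ready pods match the Service selector at each step. A sketch of the Service and one backing pod; the selector labels and image are assumptions, the names and port come from the log.

    apiVersion: v1
    kind: Service
    metadata:
      name: endpoint-test2
      namespace: services-3449
    spec:
      selector:
        app: endpoint-test2        # assumed selector
      ports:
      - port: 80
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod1
      namespace: services-3449
      labels:
        app: endpoint-test2        # must match the selector for pod1:[80] to appear
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1    # assumed image
        ports:
        - containerPort: 80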
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:06:46.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 00:06:47.202: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 18 00:06:52.207: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 18 00:06:52.207: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 18 00:06:52.231: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2019,SelfLink:/apis/apps/v1/namespaces/deployment-2019/deployments/test-cleanup-deployment,UID:3ae1cb09-7b7d-4508-849d-8490537be287,ResourceVersion:6950481,Generation:1,CreationTimestamp:2021-02-18 00:06:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb 18 00:06:52.238: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2019,SelfLink:/apis/apps/v1/namespaces/deployment-2019/replicasets/test-cleanup-deployment-55bbcbc84c,UID:44c9ed23-aaf2-4023-9b17-be868fe43e29,ResourceVersion:6950483,Generation:1,CreationTimestamp:2021-02-18 00:06:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 3ae1cb09-7b7d-4508-849d-8490537be287 0xc002091677 0xc002091678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 18 00:06:52.238: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb 18 00:06:52.239: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2019,SelfLink:/apis/apps/v1/namespaces/deployment-2019/replicasets/test-cleanup-controller,UID:98b4082e-e7c7-48b6-9cea-1849e1a71996,ResourceVersion:6950482,Generation:1,CreationTimestamp:2021-02-18 00:06:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 3ae1cb09-7b7d-4508-849d-8490537be287 0xc0020915a7 0xc0020915a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 18 00:06:52.312: INFO: Pod "test-cleanup-controller-xf7dg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-xf7dg,GenerateName:test-cleanup-controller-,Namespace:deployment-2019,SelfLink:/api/v1/namespaces/deployment-2019/pods/test-cleanup-controller-xf7dg,UID:1c1cb30b-56c1-4a8a-b6aa-58aaedc91ca9,ResourceVersion:6950476,Generation:0,CreationTimestamp:2021-02-18 00:06:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 98b4082e-e7c7-48b6-9cea-1849e1a71996 0xc002091f57 0xc002091f58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2tnp4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tnp4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-2tnp4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002091fd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002091ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:06:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:06:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:06:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:06:47 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.80,StartTime:2021-02-18 00:06:47 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-02-18 00:06:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://650dc8f574f0df8d10f0b416b1f74b2a4286cdbfc64b449e282bf1256c463ee7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 00:06:52.312: INFO: Pod "test-cleanup-deployment-55bbcbc84c-sm8gj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-sm8gj,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2019,SelfLink:/api/v1/namespaces/deployment-2019/pods/test-cleanup-deployment-55bbcbc84c-sm8gj,UID:dd21bef1-2dde-468d-a184-3afb72fd5f82,ResourceVersion:6950487,Generation:0,CreationTimestamp:2021-02-18 00:06:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 44c9ed23-aaf2-4023-9b17-be868fe43e29 0xc002a862e7 0xc002a862e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2tnp4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2tnp4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-2tnp4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002a86800} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002a86820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:06:52 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:06:52.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2019" for this suite.
Feb 18 00:07:00.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:07:00.477: INFO: namespace deployment-2019 deletion completed in 8.142791601s

• [SLOW TEST:13.545 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
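
The Deployment dump above shows RevisionHistoryLimit:*0, which is the mechanism under test: with no history retained, the adopted ReplicaSet (test-cleanup-controller) becomes eligible for deletion as soon as the rollout supersedes it. A trimmed manifest reconstructed from that dump:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: test-cleanup-deployment
      namespace: deployment-2019
    spec:
      replicas: 1
      revisionHistoryLimit: 0        # keep no old ReplicaSets around
      selector:
        matchLabels:
          name: cleanup-pod
      template:
        metadata:
          labels:
            name: cleanup-pod
        spec:
          containers:
          - name: redis
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0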
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:07:00.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 00:07:00.613: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b3fb2e4-0d95-4e69-b7aa-02342a0f3cc6" in namespace "downward-api-7160" to be "success or failure"
Feb 18 00:07:00.622: INFO: Pod "downwardapi-volume-4b3fb2e4-0d95-4e69-b7aa-02342a0f3cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.456472ms
Feb 18 00:07:02.627: INFO: Pod "downwardapi-volume-4b3fb2e4-0d95-4e69-b7aa-02342a0f3cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013285357s
Feb 18 00:07:04.631: INFO: Pod "downwardapi-volume-4b3fb2e4-0d95-4e69-b7aa-02342a0f3cc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0177845s
STEP: Saw pod success
Feb 18 00:07:04.631: INFO: Pod "downwardapi-volume-4b3fb2e4-0d95-4e69-b7aa-02342a0f3cc6" satisfied condition "success or failure"
Feb 18 00:07:04.634: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4b3fb2e4-0d95-4e69-b7aa-02342a0f3cc6 container client-container: 
STEP: delete the pod
Feb 18 00:07:04.714: INFO: Waiting for pod downwardapi-volume-4b3fb2e4-0d95-4e69-b7aa-02342a0f3cc6 to disappear
Feb 18 00:07:04.742: INFO: Pod downwardapi-volume-4b3fb2e4-0d95-4e69-b7aa-02342a0f3cc6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:07:04.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7160" for this suite.
Feb 18 00:07:12.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:07:13.113: INFO: namespace downward-api-7160 deletion completed in 8.366724071s

• [SLOW TEST:12.636 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:07:13.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 00:07:14.252: INFO: Create a RollingUpdate DaemonSet
Feb 18 00:07:14.255: INFO: Check that daemon pods launch on every node of the cluster
Feb 18 00:07:14.438: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:14.492: INFO: Number of nodes with available pods: 0
Feb 18 00:07:14.492: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:07:15.498: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:15.501: INFO: Number of nodes with available pods: 0
Feb 18 00:07:15.501: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:07:16.497: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:16.500: INFO: Number of nodes with available pods: 0
Feb 18 00:07:16.500: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:07:17.496: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:17.498: INFO: Number of nodes with available pods: 0
Feb 18 00:07:17.498: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:07:18.497: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:18.499: INFO: Number of nodes with available pods: 1
Feb 18 00:07:18.499: INFO: Node iruya-worker2 is running more than one daemon pod
Feb 18 00:07:19.497: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:19.500: INFO: Number of nodes with available pods: 2
Feb 18 00:07:19.500: INFO: Number of running nodes: 2, number of available pods: 2
Feb 18 00:07:19.500: INFO: Update the DaemonSet to trigger a rollout
Feb 18 00:07:19.507: INFO: Updating DaemonSet daemon-set
Feb 18 00:07:23.834: INFO: Roll back the DaemonSet before rollout is complete
Feb 18 00:07:23.840: INFO: Updating DaemonSet daemon-set
Feb 18 00:07:23.840: INFO: Make sure DaemonSet rollback is complete
Feb 18 00:07:23.843: INFO: Wrong image for pod: daemon-set-2nzgc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 18 00:07:23.843: INFO: Pod daemon-set-2nzgc is not available
Feb 18 00:07:23.849: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:24.853: INFO: Wrong image for pod: daemon-set-2nzgc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 18 00:07:24.853: INFO: Pod daemon-set-2nzgc is not available
Feb 18 00:07:24.857: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:25.854: INFO: Wrong image for pod: daemon-set-2nzgc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 18 00:07:25.854: INFO: Pod daemon-set-2nzgc is not available
Feb 18 00:07:25.858: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:26.868: INFO: Wrong image for pod: daemon-set-2nzgc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 18 00:07:26.868: INFO: Pod daemon-set-2nzgc is not available
Feb 18 00:07:26.872: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:27.854: INFO: Wrong image for pod: daemon-set-2nzgc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 18 00:07:27.854: INFO: Pod daemon-set-2nzgc is not available
Feb 18 00:07:27.858: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:28.856: INFO: Wrong image for pod: daemon-set-2nzgc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 18 00:07:28.856: INFO: Pod daemon-set-2nzgc is not available
Feb 18 00:07:28.860: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:29.853: INFO: Wrong image for pod: daemon-set-2nzgc. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb 18 00:07:29.853: INFO: Pod daemon-set-2nzgc is not available
Feb 18 00:07:29.858: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:07:30.854: INFO: Pod daemon-set-2rsrj is not available
Feb 18 00:07:30.876: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7686, will wait for the garbage collector to delete the pods
Feb 18 00:07:30.976: INFO: Deleting DaemonSet.extensions daemon-set took: 7.36536ms
Feb 18 00:07:31.276: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.25348ms
Feb 18 00:07:42.680: INFO: Number of nodes with available pods: 0
Feb 18 00:07:42.680: INFO: Number of running nodes: 0, number of available pods: 0
Feb 18 00:07:42.682: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7686/daemonsets","resourceVersion":"6950715"},"items":null}

Feb 18 00:07:42.685: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7686/pods","resourceVersion":"6950715"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:07:42.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7686" for this suite.
Feb 18 00:07:48.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:07:48.799: INFO: namespace daemonsets-7686 deletion completed in 6.104812317s

• [SLOW TEST:35.685 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
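
Note on reproducing this outside the e2e harness: the rollout-and-rollback sequence above amounts to creating a DaemonSet, updating its pod template to an unpullable image, and undoing the rollout before it completes. A minimal sketch follows; the name and image mirror the log, but the selector labels and update strategy are illustrative rather than the exact spec in test/e2e/apps/daemon_set.go.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine

Updating the template image to a broken reference such as foo:non-existent starts a rollout that can never finish (the "Wrong image for pod" lines above). Rolling back, e.g. with kubectl rollout undo daemonset/daemon-set, restores the old template; pods still running the original image are left alone, which is the "without unnecessary restarts" property this test asserts.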
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:07:48.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-f1f766de-dcfe-4106-98bc-fba3a9ab7107
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-f1f766de-dcfe-4106-98bc-fba3a9ab7107
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:07:56.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3194" for this suite.
Feb 18 00:08:19.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:08:19.099: INFO: namespace configmap-3194 deletion completed in 22.114048448s

• [SLOW TEST:30.299 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
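
Note: "updates should be reflected in volume" relies on the kubelet periodically re-syncing ConfigMap-backed volumes into running pods, so an edit to the ConfigMap eventually shows up in the mounted files without a restart; the "waiting to observe update in volume" step above is waiting out that sync delay. A minimal sketch of such a pod — the ConfigMap name is taken from the log, while the key name and the reader loop are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-upd
spec:
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 2; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: configmap-test-upd-f1f766de-dcfe-4106-98bc-fba3a9ab7107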
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:08:19.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 18 00:08:27.296: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 00:08:27.329: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 00:08:29.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 00:08:29.334: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 00:08:31.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 00:08:31.334: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 00:08:33.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 00:08:33.334: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 00:08:35.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 00:08:35.334: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 00:08:37.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 00:08:37.334: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 00:08:39.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 00:08:39.334: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 00:08:41.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 00:08:41.334: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 00:08:43.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 00:08:43.334: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 00:08:45.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 00:08:45.348: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 18 00:08:47.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 18 00:08:47.334: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:08:47.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2738" for this suite.
Feb 18 00:09:05.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:09:05.440: INFO: namespace container-lifecycle-hook-2738 deletion completed in 18.101595825s

• [SLOW TEST:46.340 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
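
Note: a postStart exec hook runs right after the container is created, and the kubelet does not mark the container Running until the handler returns; a failing handler gets the container killed per its restart policy. In this test the hook reports back over HTTP to the helper pod created in the "create the container to handle the HTTPGet hook request" step; the sketch below substitutes a simple file write so it stays self-contained.

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart > /tmp/hook-ran"]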
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:09:05.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-h96c
STEP: Creating a pod to test atomic-volume-subpath
Feb 18 00:09:05.565: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-h96c" in namespace "subpath-7654" to be "success or failure"
Feb 18 00:09:05.641: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Pending", Reason="", readiness=false. Elapsed: 76.49346ms
Feb 18 00:09:07.725: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160138336s
Feb 18 00:09:09.729: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Running", Reason="", readiness=true. Elapsed: 4.163612162s
Feb 18 00:09:11.733: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Running", Reason="", readiness=true. Elapsed: 6.16753222s
Feb 18 00:09:13.737: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Running", Reason="", readiness=true. Elapsed: 8.172010204s
Feb 18 00:09:15.740: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Running", Reason="", readiness=true. Elapsed: 10.174982027s
Feb 18 00:09:17.744: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Running", Reason="", readiness=true. Elapsed: 12.179178576s
Feb 18 00:09:19.747: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Running", Reason="", readiness=true. Elapsed: 14.182472826s
Feb 18 00:09:21.751: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Running", Reason="", readiness=true. Elapsed: 16.185937721s
Feb 18 00:09:23.755: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Running", Reason="", readiness=true. Elapsed: 18.189714005s
Feb 18 00:09:25.785: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Running", Reason="", readiness=true. Elapsed: 20.219830732s
Feb 18 00:09:27.789: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Running", Reason="", readiness=true. Elapsed: 22.223979133s
Feb 18 00:09:29.793: INFO: Pod "pod-subpath-test-projected-h96c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.227977446s
STEP: Saw pod success
Feb 18 00:09:29.793: INFO: Pod "pod-subpath-test-projected-h96c" satisfied condition "success or failure"
Feb 18 00:09:29.795: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-h96c container test-container-subpath-projected-h96c: 
STEP: delete the pod
Feb 18 00:09:29.859: INFO: Waiting for pod pod-subpath-test-projected-h96c to disappear
Feb 18 00:09:29.925: INFO: Pod pod-subpath-test-projected-h96c no longer exists
STEP: Deleting pod pod-subpath-test-projected-h96c
Feb 18 00:09:29.925: INFO: Deleting pod "pod-subpath-test-projected-h96c" in namespace "subpath-7654"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:09:29.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7654" for this suite.
Feb 18 00:09:36.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:09:36.131: INFO: namespace subpath-7654 deletion completed in 6.199438149s

• [SLOW TEST:30.691 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
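
Note: a subPath volumeMount exposes a single entry of a volume instead of its root. For atomic-writer volume types (configMap, secret, downwardAPI, projected), which publish their contents through a timestamped directory and a symlink swap, this test checks that a container reading through a subPath keeps seeing consistent data; the long Running phase above is the container re-reading the file before exiting successfully. A rough sketch, where my-config and its data-1 key are hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-projected
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "for i in $(seq 1 10); do cat /test-volume/data-1; sleep 2; done"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume
      subPath: subpath-dir          # mount only this directory of the volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: my-config           # hypothetical ConfigMap with a key "data-1"
          items:
          - key: data-1
            path: subpath-dir/data-1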
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:09:36.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-2699dc02-f7e7-4987-ad77-58ab9394623b
STEP: Creating a pod to test consume secrets
Feb 18 00:09:36.253: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ca7b2bd9-3f1f-4e07-aea4-d8723f8fea0b" in namespace "projected-5523" to be "success or failure"
Feb 18 00:09:36.258: INFO: Pod "pod-projected-secrets-ca7b2bd9-3f1f-4e07-aea4-d8723f8fea0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.616053ms
Feb 18 00:09:38.262: INFO: Pod "pod-projected-secrets-ca7b2bd9-3f1f-4e07-aea4-d8723f8fea0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008692183s
Feb 18 00:09:40.266: INFO: Pod "pod-projected-secrets-ca7b2bd9-3f1f-4e07-aea4-d8723f8fea0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012703909s
STEP: Saw pod success
Feb 18 00:09:40.266: INFO: Pod "pod-projected-secrets-ca7b2bd9-3f1f-4e07-aea4-d8723f8fea0b" satisfied condition "success or failure"
Feb 18 00:09:40.269: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-ca7b2bd9-3f1f-4e07-aea4-d8723f8fea0b container projected-secret-volume-test: 
STEP: delete the pod
Feb 18 00:09:40.327: INFO: Waiting for pod pod-projected-secrets-ca7b2bd9-3f1f-4e07-aea4-d8723f8fea0b to disappear
Feb 18 00:09:40.335: INFO: Pod pod-projected-secrets-ca7b2bd9-3f1f-4e07-aea4-d8723f8fea0b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:09:40.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5523" for this suite.
Feb 18 00:09:46.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:09:46.476: INFO: namespace projected-5523 deletion completed in 6.137257497s

• [SLOW TEST:10.344 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
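
Note: "with mappings" means the secret's keys are remapped to custom file paths via items, rather than being projected under their own names. A minimal sketch; the secret name comes from the log, while the key/path pair and the reader command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-2699dc02-f7e7-4987-ad77-58ab9394623b
          items:
          - key: data-1
            path: new-path-data-1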
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:09:46.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-bd0c6ddb-0660-49a6-a1ae-52c1297df1f6
STEP: Creating a pod to test consume secrets
Feb 18 00:09:46.547: INFO: Waiting up to 5m0s for pod "pod-secrets-a97fc208-2094-43ba-9c28-bec04231e927" in namespace "secrets-6268" to be "success or failure"
Feb 18 00:09:46.551: INFO: Pod "pod-secrets-a97fc208-2094-43ba-9c28-bec04231e927": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198536ms
Feb 18 00:09:48.555: INFO: Pod "pod-secrets-a97fc208-2094-43ba-9c28-bec04231e927": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008148487s
Feb 18 00:09:50.559: INFO: Pod "pod-secrets-a97fc208-2094-43ba-9c28-bec04231e927": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011322507s
STEP: Saw pod success
Feb 18 00:09:50.559: INFO: Pod "pod-secrets-a97fc208-2094-43ba-9c28-bec04231e927" satisfied condition "success or failure"
Feb 18 00:09:50.560: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-a97fc208-2094-43ba-9c28-bec04231e927 container secret-volume-test: 
STEP: delete the pod
Feb 18 00:09:50.696: INFO: Waiting for pod pod-secrets-a97fc208-2094-43ba-9c28-bec04231e927 to disappear
Feb 18 00:09:50.713: INFO: Pod pod-secrets-a97fc208-2094-43ba-9c28-bec04231e927 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:09:50.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6268" for this suite.
Feb 18 00:09:56.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:09:56.927: INFO: namespace secrets-6268 deletion completed in 6.211310539s

• [SLOW TEST:10.451 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
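
Note: defaultMode sets the permission bits applied to every file projected from the secret (0644 when unset; JSON renders the value in decimal, e.g. 420 for 0644). A sketch with a restrictive mode; the secret name is from the log, the command is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-bd0c6ddb-0660-49a6-a1ae-52c1297df1f6
      defaultMode: 0400             # owner read-only; 256 in decimal/JSON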
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:09:56.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-580ec649-2af9-44e9-bb74-426f648c6633 in namespace container-probe-4565
Feb 18 00:10:01.139: INFO: Started pod busybox-580ec649-2af9-44e9-bb74-426f648c6633 in namespace container-probe-4565
STEP: checking the pod's current state and verifying that restartCount is present
Feb 18 00:10:01.143: INFO: Initial restart count of pod busybox-580ec649-2af9-44e9-bb74-426f648c6633 is 0
Feb 18 00:10:55.799: INFO: Restart count of pod container-probe-4565/busybox-580ec649-2af9-44e9-bb74-426f648c6633 is now 1 (54.656397762s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:10:55.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4565" for this suite.
Feb 18 00:11:01.933: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:11:02.047: INFO: namespace container-probe-4565 deletion completed in 6.128135874s

• [SLOW TEST:65.118 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
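
Note: the ~55 s to the first restart is the probe mechanism at work: the container creates /tmp/health, removes it after a while, and once the exec probe has failed failureThreshold consecutive times the kubelet kills and restarts the container, bumping restartCount from 0 to 1. A sketch along the lines of the upstream liveness example; all timings here are illustrative, not the test's exact values:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3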
------------------------------
SSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:11:02.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-d5bfefcb-5897-400e-bc09-fe174d47d156 in namespace container-probe-4444
Feb 18 00:11:06.132: INFO: Started pod busybox-d5bfefcb-5897-400e-bc09-fe174d47d156 in namespace container-probe-4444
STEP: checking the pod's current state and verifying that restartCount is present
Feb 18 00:11:06.135: INFO: Initial restart count of pod busybox-d5bfefcb-5897-400e-bc09-fe174d47d156 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:15:07.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4444" for this suite.
Feb 18 00:15:13.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:15:13.274: INFO: namespace container-probe-4444 deletion completed in 6.106282666s

• [SLOW TEST:251.227 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:15:13.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:16:13.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3917" for this suite.
Feb 18 00:16:35.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:16:35.514: INFO: namespace container-probe-3917 deletion completed in 22.176298583s

• [SLOW TEST:82.240 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
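
Note: readiness failures are handled very differently from liveness failures: the pod's Ready condition stays False and it is withheld from Service endpoints, but the container is never killed. That is why this test simply waits about a minute and asserts the pod never becomes ready and restartCount stays 0. A minimal sketch of such a pod (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/nginx:1.14-alpine
    readinessProbe:
      exec:
        command: ["/bin/false"]     # always fails, so the pod never turns Ready
      initialDelaySeconds: 5
      periodSeconds: 5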
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:16:35.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 18 00:16:35.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-1566'
Feb 18 00:16:38.693: INFO: stderr: ""
Feb 18 00:16:38.693: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb 18 00:16:43.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-1566 -o json'
Feb 18 00:16:43.831: INFO: stderr: ""
Feb 18 00:16:43.831: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2021-02-18T00:16:38Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-1566\",\n        \"resourceVersion\": \"6952046\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-1566/pods/e2e-test-nginx-pod\",\n        \"uid\": \"baad9819-a908-46ca-8bfe-3241537b1894\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-hghdd\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-hghdd\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-hghdd\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-02-18T00:16:38Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-02-18T00:16:42Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-02-18T00:16:42Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2021-02-18T00:16:38Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://037902b06b8022ffb503c3fce6f88b9505dd75a8b2a6ad8e7b6b6580d2138457\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2021-02-18T00:16:42Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.3\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.193\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2021-02-18T00:16:38Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 18 00:16:43.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1566'
Feb 18 00:16:44.447: INFO: stderr: ""
Feb 18 00:16:44.447: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb 18 00:16:44.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1566'
Feb 18 00:16:50.811: INFO: stderr: ""
Feb 18 00:16:50.811: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:16:50.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1566" for this suite.
Feb 18 00:16:56.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:16:56.919: INFO: namespace kubectl-1566 deletion completed in 6.103962836s

• [SLOW TEST:21.404 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
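
Note: the "replace the image" step round-trips the live object: the test takes the JSON dump above, rewrites spec.containers[0].image to docker.io/library/busybox:1.29, and pipes the whole thing to kubectl replace -f -. Round-tripping matters because replace submits a full update and nearly every other pod spec field is immutable, so a hand-written minimal manifest would be rejected. Trimmed to the fields that matter, the replaced pod looks like this (not a manifest you could feed to replace as-is):

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-1566
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # the only field changed from the dump above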
------------------------------
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:16:56.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 18 00:16:56.986: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 18 00:16:57.000: INFO: Waiting for terminating namespaces to be deleted...
Feb 18 00:16:57.003: INFO: 
Logging pods the kubelet thinks is on node iruya-worker before test
Feb 18 00:16:57.011: INFO: coredns-5d4dd4b4db-69khc from kube-system started at 2021-01-10 17:26:03 +0000 UTC (1 container statuses recorded)
Feb 18 00:16:57.011: INFO: 	Container coredns ready: true, restart count 1
Feb 18 00:16:57.011: INFO: kube-proxy-24ww6 from kube-system started at 2021-01-10 17:25:00 +0000 UTC (1 container statuses recorded)
Feb 18 00:16:57.011: INFO: 	Container kube-proxy ready: true, restart count 1
Feb 18 00:16:57.011: INFO: chaos-controller-manager-6c68f56f79-2j2xr from default started at 2021-01-11 03:53:47 +0000 UTC (1 container statuses recorded)
Feb 18 00:16:57.011: INFO: 	Container chaos-mesh ready: true, restart count 2
Feb 18 00:16:57.011: INFO: local-path-provisioner-7f465859dc-zj67c from local-path-storage started at 2021-01-10 17:26:02 +0000 UTC (1 container statuses recorded)
Feb 18 00:16:57.011: INFO: 	Container local-path-provisioner ready: true, restart count 7
Feb 18 00:16:57.011: INFO: kindnet-vgcd6 from kube-system started at 2021-01-10 17:25:04 +0000 UTC (1 container statuses recorded)
Feb 18 00:16:57.011: INFO: 	Container kindnet-cni ready: true, restart count 1
Feb 18 00:16:57.011: INFO: chaos-daemon-s74sn from default started at 2021-01-11 03:53:47 +0000 UTC (1 container statuses recorded)
Feb 18 00:16:57.011: INFO: 	Container chaos-daemon ready: true, restart count 1
Feb 18 00:16:57.011: INFO: coredns-5d4dd4b4db-b9gp2 from kube-system started at 2021-01-10 17:25:57 +0000 UTC (1 container statuses recorded)
Feb 18 00:16:57.011: INFO: 	Container coredns ready: true, restart count 1
Feb 18 00:16:57.011: INFO: 
Logging pods the kubelet thinks is on node iruya-worker2 before test
Feb 18 00:16:57.017: INFO: chaos-daemon-7gq5t from default started at 2021-01-11 03:53:47 +0000 UTC (1 container statuses recorded)
Feb 18 00:16:57.017: INFO: 	Container chaos-daemon ready: true, restart count 1
Feb 18 00:16:57.017: INFO: kindnet-gbtx5 from kube-system started at 2021-01-10 17:25:04 +0000 UTC (1 container statuses recorded)
Feb 18 00:16:57.017: INFO: 	Container kindnet-cni ready: true, restart count 2
Feb 18 00:16:57.017: INFO: kube-proxy-h6zb5 from kube-system started at 2021-01-10 17:25:00 +0000 UTC (1 container statuses recorded)
Feb 18 00:16:57.017: INFO: 	Container kube-proxy ready: true, restart count 1
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Feb 18 00:16:57.104: INFO: Pod chaos-controller-manager-6c68f56f79-2j2xr requesting resource cpu=25m on Node iruya-worker
Feb 18 00:16:57.104: INFO: Pod chaos-daemon-7gq5t requesting resource cpu=0m on Node iruya-worker2
Feb 18 00:16:57.104: INFO: Pod chaos-daemon-s74sn requesting resource cpu=0m on Node iruya-worker
Feb 18 00:16:57.104: INFO: Pod coredns-5d4dd4b4db-69khc requesting resource cpu=100m on Node iruya-worker
Feb 18 00:16:57.104: INFO: Pod coredns-5d4dd4b4db-b9gp2 requesting resource cpu=100m on Node iruya-worker
Feb 18 00:16:57.104: INFO: Pod kindnet-gbtx5 requesting resource cpu=100m on Node iruya-worker2
Feb 18 00:16:57.104: INFO: Pod kindnet-vgcd6 requesting resource cpu=100m on Node iruya-worker
Feb 18 00:16:57.104: INFO: Pod kube-proxy-24ww6 requesting resource cpu=0m on Node iruya-worker
Feb 18 00:16:57.104: INFO: Pod kube-proxy-h6zb5 requesting resource cpu=0m on Node iruya-worker2
Feb 18 00:16:57.104: INFO: Pod local-path-provisioner-7f465859dc-zj67c requesting resource cpu=0m on Node iruya-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bb9fdc5e-302b-4cf1-893d-634f53f516e8.1664af65899cd336], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5903/filler-pod-bb9fdc5e-302b-4cf1-893d-634f53f516e8 to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bb9fdc5e-302b-4cf1-893d-634f53f516e8.1664af65ebbdddb8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bb9fdc5e-302b-4cf1-893d-634f53f516e8.1664af668d79b2f5], Reason = [Created], Message = [Created container filler-pod-bb9fdc5e-302b-4cf1-893d-634f53f516e8]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bb9fdc5e-302b-4cf1-893d-634f53f516e8.1664af669c872341], Reason = [Started], Message = [Started container filler-pod-bb9fdc5e-302b-4cf1-893d-634f53f516e8]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f728b517-f425-42c1-b182-bd5752f95af0.1664af658823f780], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5903/filler-pod-f728b517-f425-42c1-b182-bd5752f95af0 to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f728b517-f425-42c1-b182-bd5752f95af0.1664af65d3468f7e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f728b517-f425-42c1-b182-bd5752f95af0.1664af667e1dc7ae], Reason = [Created], Message = [Created container filler-pod-f728b517-f425-42c1-b182-bd5752f95af0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-f728b517-f425-42c1-b182-bd5752f95af0.1664af6695be9cc6], Reason = [Started], Message = [Started container filler-pod-f728b517-f425-42c1-b182-bd5752f95af0]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1664af66f124a65a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:17:04.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5903" for this suite.
Feb 18 00:17:11.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:17:11.109: INFO: namespace sched-pred-5903 deletion completed in 6.256853211s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:14.190 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
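
Note: the test reads each node's allocatable CPU, subtracts the running pods' requests logged above, and creates one pause-image "filler" pod per node sized to consume almost all of the remainder; a final pod then requests more CPU than is left, producing the FailedScheduling event quoted above. A sketch of one filler pod; the 800m figure is illustrative, since the real value is computed per node:

apiVersion: v1
kind: Pod
metadata:
  name: filler-pod
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: 800m
      limits:
        cpu: 800m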
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:17:11.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 00:17:11.305: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a70592e3-4425-4b72-9874-87c834d89a53" in namespace "projected-944" to be "success or failure"
Feb 18 00:17:11.526: INFO: Pod "downwardapi-volume-a70592e3-4425-4b72-9874-87c834d89a53": Phase="Pending", Reason="", readiness=false. Elapsed: 220.985059ms
Feb 18 00:17:13.530: INFO: Pod "downwardapi-volume-a70592e3-4425-4b72-9874-87c834d89a53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225278825s
Feb 18 00:17:15.534: INFO: Pod "downwardapi-volume-a70592e3-4425-4b72-9874-87c834d89a53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.229543281s
STEP: Saw pod success
Feb 18 00:17:15.535: INFO: Pod "downwardapi-volume-a70592e3-4425-4b72-9874-87c834d89a53" satisfied condition "success or failure"
Feb 18 00:17:15.538: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a70592e3-4425-4b72-9874-87c834d89a53 container client-container: 
STEP: delete the pod
Feb 18 00:17:15.561: INFO: Waiting for pod downwardapi-volume-a70592e3-4425-4b72-9874-87c834d89a53 to disappear
Feb 18 00:17:15.565: INFO: Pod downwardapi-volume-a70592e3-4425-4b72-9874-87c834d89a53 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:17:15.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-944" for this suite.
Feb 18 00:17:21.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:17:21.732: INFO: namespace projected-944 deletion completed in 6.16435731s

• [SLOW TEST:10.623 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
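
Note: the memory request reaches the container through a downwardAPI projection with a resourceFieldRef; with the default divisor of 1 the file contains the request in bytes (e.g. 33554432 for 32Mi). A minimal sketch; the request size and file name are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory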
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:17:21.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 18 00:17:21.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4463'
Feb 18 00:17:22.094: INFO: stderr: ""
Feb 18 00:17:22.094: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 18 00:17:22.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4463'
Feb 18 00:17:22.503: INFO: stderr: ""
Feb 18 00:17:22.503: INFO: stdout: "update-demo-nautilus-chvgm update-demo-nautilus-w4nmz "
Feb 18 00:17:22.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chvgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4463'
Feb 18 00:17:22.688: INFO: stderr: ""
Feb 18 00:17:22.688: INFO: stdout: ""
Feb 18 00:17:22.688: INFO: update-demo-nautilus-chvgm is created but not running
Feb 18 00:17:27.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4463'
Feb 18 00:17:27.800: INFO: stderr: ""
Feb 18 00:17:27.800: INFO: stdout: "update-demo-nautilus-chvgm update-demo-nautilus-w4nmz "
Feb 18 00:17:27.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chvgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4463'
Feb 18 00:17:27.886: INFO: stderr: ""
Feb 18 00:17:27.886: INFO: stdout: "true"
Feb 18 00:17:27.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chvgm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4463'
Feb 18 00:17:27.965: INFO: stderr: ""
Feb 18 00:17:27.965: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 00:17:27.965: INFO: validating pod update-demo-nautilus-chvgm
Feb 18 00:17:27.979: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 00:17:27.979: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 18 00:17:27.979: INFO: update-demo-nautilus-chvgm is verified up and running
Feb 18 00:17:27.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w4nmz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4463'
Feb 18 00:17:28.062: INFO: stderr: ""
Feb 18 00:17:28.062: INFO: stdout: "true"
Feb 18 00:17:28.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-w4nmz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4463'
Feb 18 00:17:28.155: INFO: stderr: ""
Feb 18 00:17:28.155: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 00:17:28.155: INFO: validating pod update-demo-nautilus-w4nmz
Feb 18 00:17:28.170: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 00:17:28.170: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 18 00:17:28.170: INFO: update-demo-nautilus-w4nmz is verified up and running
STEP: using delete to clean up resources
Feb 18 00:17:28.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4463'
Feb 18 00:17:28.290: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 18 00:17:28.290: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 18 00:17:28.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4463'
Feb 18 00:17:28.374: INFO: stderr: "No resources found.\n"
Feb 18 00:17:28.374: INFO: stdout: ""
Feb 18 00:17:28.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4463 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 18 00:17:28.464: INFO: stderr: ""
Feb 18 00:17:28.464: INFO: stdout: "update-demo-nautilus-chvgm\nupdate-demo-nautilus-w4nmz\n"
Feb 18 00:17:28.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4463'
Feb 18 00:17:29.166: INFO: stderr: "No resources found.\n"
Feb 18 00:17:29.166: INFO: stdout: ""
Feb 18 00:17:29.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4463 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 18 00:17:29.266: INFO: stderr: ""
Feb 18 00:17:29.266: INFO: stdout: ""
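
The cleanup pattern above pairs a force delete with two polls: controllers and services by label, then pods filtered on metadata.deletionTimestamp so that pods already marked for deletion are not counted as survivors. A sketch with this run's names (rc.yaml stands in for the manifest the log pipes via -f -):

kubectl delete --grace-period=0 --force -f rc.yaml --namespace=kubectl-4463
kubectl get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4463
# Lists only pods not yet marked for deletion; empty output means cleanup has converged.
kubectl get pods -l name=update-demo --namespace=kubectl-4463 \
  -o go-template='{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
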
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:17:29.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4463" for this suite.
Feb 18 00:17:53.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:17:53.382: INFO: namespace kubectl-4463 deletion completed in 24.111431521s

• [SLOW TEST:31.650 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:17:53.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-6732/secret-test-d46ec9f5-8ce0-449f-8ec4-923097fe8c03
STEP: Creating a pod to test consume secrets
Feb 18 00:17:53.492: INFO: Waiting up to 5m0s for pod "pod-configmaps-e2d77ef8-15ea-479a-8903-403514008b11" in namespace "secrets-6732" to be "success or failure"
Feb 18 00:17:53.500: INFO: Pod "pod-configmaps-e2d77ef8-15ea-479a-8903-403514008b11": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183728ms
Feb 18 00:17:55.514: INFO: Pod "pod-configmaps-e2d77ef8-15ea-479a-8903-403514008b11": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022461997s
Feb 18 00:17:57.518: INFO: Pod "pod-configmaps-e2d77ef8-15ea-479a-8903-403514008b11": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026733958s
STEP: Saw pod success
Feb 18 00:17:57.519: INFO: Pod "pod-configmaps-e2d77ef8-15ea-479a-8903-403514008b11" satisfied condition "success or failure"
Feb 18 00:17:57.522: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e2d77ef8-15ea-479a-8903-403514008b11 container env-test: 
STEP: delete the pod
Feb 18 00:17:57.700: INFO: Waiting for pod pod-configmaps-e2d77ef8-15ea-479a-8903-403514008b11 to disappear
Feb 18 00:17:57.716: INFO: Pod pod-configmaps-e2d77ef8-15ea-479a-8903-403514008b11 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:17:57.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6732" for this suite.
Feb 18 00:18:03.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:18:03.824: INFO: namespace secrets-6732 deletion completed in 6.104509066s

• [SLOW TEST:10.441 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
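
What this test drives is a secret key injected as an environment variable. A minimal sketch of that shape, with illustrative names rather than the generated ones from this run:

kubectl create secret generic demo-secret --from-literal=key1=value1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo $SECRET_KEY1"]   # should print "value1"
    env:
    - name: SECRET_KEY1
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: key1
EOF
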
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:18:03.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 00:18:03.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7adf7544-e9c1-4d5f-94af-bf6459bcfe5a" in namespace "downward-api-1125" to be "success or failure"
Feb 18 00:18:03.931: INFO: Pod "downwardapi-volume-7adf7544-e9c1-4d5f-94af-bf6459bcfe5a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.340201ms
Feb 18 00:18:05.935: INFO: Pod "downwardapi-volume-7adf7544-e9c1-4d5f-94af-bf6459bcfe5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011472655s
Feb 18 00:18:07.940: INFO: Pod "downwardapi-volume-7adf7544-e9c1-4d5f-94af-bf6459bcfe5a": Phase="Running", Reason="", readiness=true. Elapsed: 4.015827212s
Feb 18 00:18:09.943: INFO: Pod "downwardapi-volume-7adf7544-e9c1-4d5f-94af-bf6459bcfe5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019514994s
STEP: Saw pod success
Feb 18 00:18:09.943: INFO: Pod "downwardapi-volume-7adf7544-e9c1-4d5f-94af-bf6459bcfe5a" satisfied condition "success or failure"
Feb 18 00:18:09.946: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7adf7544-e9c1-4d5f-94af-bf6459bcfe5a container client-container: 
STEP: delete the pod
Feb 18 00:18:10.013: INFO: Waiting for pod downwardapi-volume-7adf7544-e9c1-4d5f-94af-bf6459bcfe5a to disappear
Feb 18 00:18:10.027: INFO: Pod downwardapi-volume-7adf7544-e9c1-4d5f-94af-bf6459bcfe5a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:18:10.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1125" for this suite.
Feb 18 00:18:16.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:18:16.166: INFO: namespace downward-api-1125 deletion completed in 6.135519568s

• [SLOW TEST:12.342 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
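
"Set mode on item file" exercises the per-item mode field of a downwardAPI volume. A minimal sketch (names and the 0400 mode are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -lL /etc/podinfo/podname && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "podname"
        fieldRef:
          fieldPath: metadata.name
        mode: 0400    # the projected file should show r-------- permissions
EOF
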
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:18:16.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 18 00:18:16.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9538 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 18 00:18:19.438: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Feb 18 00:18:19.439: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:18:21.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9538" for this suite.
Feb 18 00:18:27.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:18:27.552: INFO: namespace kubectl-9538 deletion completed in 6.104089742s

• [SLOW TEST:11.385 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
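
The --rm flow above, reconstructed as a runnable one-liner (quoting added; the job/v1 generator was already deprecated on this 1.15 cluster, as the stderr warns, and is gone from newer kubectl):

echo abcd1234 | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin --namespace=kubectl-9538 \
  -- sh -c 'cat && echo stdin closed'
# --rm deletes the job once the attached session ends; this should then come back empty:
kubectl get jobs --namespace=kubectl-9538
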
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:18:27.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-f97w
STEP: Creating a pod to test atomic-volume-subpath
Feb 18 00:18:27.648: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-f97w" in namespace "subpath-5292" to be "success or failure"
Feb 18 00:18:27.675: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Pending", Reason="", readiness=false. Elapsed: 27.735074ms
Feb 18 00:18:29.679: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03175738s
Feb 18 00:18:31.684: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Running", Reason="", readiness=true. Elapsed: 4.036076561s
Feb 18 00:18:33.687: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Running", Reason="", readiness=true. Elapsed: 6.039661645s
Feb 18 00:18:35.691: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Running", Reason="", readiness=true. Elapsed: 8.043699282s
Feb 18 00:18:37.695: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Running", Reason="", readiness=true. Elapsed: 10.047577568s
Feb 18 00:18:39.699: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Running", Reason="", readiness=true. Elapsed: 12.051308743s
Feb 18 00:18:41.703: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Running", Reason="", readiness=true. Elapsed: 14.054943744s
Feb 18 00:18:43.706: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Running", Reason="", readiness=true. Elapsed: 16.058083962s
Feb 18 00:18:45.710: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Running", Reason="", readiness=true. Elapsed: 18.062559593s
Feb 18 00:18:47.714: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Running", Reason="", readiness=true. Elapsed: 20.066618526s
Feb 18 00:18:49.718: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Running", Reason="", readiness=true. Elapsed: 22.070860397s
Feb 18 00:18:51.722: INFO: Pod "pod-subpath-test-secret-f97w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.074716659s
STEP: Saw pod success
Feb 18 00:18:51.722: INFO: Pod "pod-subpath-test-secret-f97w" satisfied condition "success or failure"
Feb 18 00:18:51.725: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-f97w container test-container-subpath-secret-f97w: 
STEP: delete the pod
Feb 18 00:18:51.876: INFO: Waiting for pod pod-subpath-test-secret-f97w to disappear
Feb 18 00:18:51.890: INFO: Pod pod-subpath-test-secret-f97w no longer exists
STEP: Deleting pod pod-subpath-test-secret-f97w
Feb 18 00:18:51.890: INFO: Deleting pod "pod-subpath-test-secret-f97w" in namespace "subpath-5292"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:18:51.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5292" for this suite.
Feb 18 00:18:57.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:18:58.013: INFO: namespace subpath-5292 deletion completed in 6.118361745s

• [SLOW TEST:30.461 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
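
The subpath case mounts a single key of a secret volume via subPath instead of the whole volume; the conformance test layers an atomic-writer workload on top of this, but the minimal shape is (illustrative names):

kubectl create secret generic demo-secret --from-literal=key1=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/demo/key1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/demo/key1   # a single file, not a directory
      subPath: key1               # selects one key from the secret volume
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret
EOF
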
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:18:58.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:19:02.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2154" for this suite.
Feb 18 00:19:08.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:19:08.266: INFO: namespace kubelet-test-2154 deletion completed in 6.139011837s

• [SLOW TEST:10.253 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
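
The assertion behind this test is that a container which keeps failing surfaces a terminated state carrying a reason. A simplified one-shot variant (the e2e test uses a restarting busybox pod; names here are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
EOF
# Once the pod reaches Failed, the terminated reason (typically "Error") is readable here:
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
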
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:19:08.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0218 00:19:38.943957       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 00:19:38.944: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:19:38.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9123" for this suite.
Feb 18 00:19:45.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:19:45.234: INFO: namespace gc-9123 deletion completed in 6.288077475s

• [SLOW TEST:36.968 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
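
Orphaning, as exercised above, comes from deleteOptions.propagationPolicy=Orphan. With the kubectl shipped in this 1.15 run the equivalent switch is --cascade=false (later releases spell it --cascade=orphan); names below are illustrative:

kubectl delete deployment demo-deployment --cascade=false
# The ReplicaSet the deployment created should survive the delete:
kubectl get rs -l app=demo
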
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:19:45.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 00:19:45.539: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e51fd7f1-0de8-480d-85a4-6f4b1fd5113d" in namespace "projected-4200" to be "success or failure"
Feb 18 00:19:45.565: INFO: Pod "downwardapi-volume-e51fd7f1-0de8-480d-85a4-6f4b1fd5113d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.038718ms
Feb 18 00:19:47.587: INFO: Pod "downwardapi-volume-e51fd7f1-0de8-480d-85a4-6f4b1fd5113d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048251199s
Feb 18 00:19:49.591: INFO: Pod "downwardapi-volume-e51fd7f1-0de8-480d-85a4-6f4b1fd5113d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052074649s
STEP: Saw pod success
Feb 18 00:19:49.591: INFO: Pod "downwardapi-volume-e51fd7f1-0de8-480d-85a4-6f4b1fd5113d" satisfied condition "success or failure"
Feb 18 00:19:49.594: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e51fd7f1-0de8-480d-85a4-6f4b1fd5113d container client-container: 
STEP: delete the pod
Feb 18 00:19:49.660: INFO: Waiting for pod downwardapi-volume-e51fd7f1-0de8-480d-85a4-6f4b1fd5113d to disappear
Feb 18 00:19:49.671: INFO: Pod downwardapi-volume-e51fd7f1-0de8-480d-85a4-6f4b1fd5113d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:19:49.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4200" for this suite.
Feb 18 00:19:55.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:19:55.807: INFO: namespace projected-4200 deletion completed in 6.131881819s

• [SLOW TEST:10.572 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
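
"Projected downwardAPI" means the downward API acts as one source inside a projected volume, and the CPU limit reaches the container through resourceFieldRef. A minimal sketch (names, the 500m limit, and the 1m divisor are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: "cpu_limit"
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m    # the file should read 500, i.e. the limit in millicores
EOF
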
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:19:55.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 18 00:20:10.141: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6550 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 00:20:10.141: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 00:20:10.246: INFO: Exec stderr: ""
Feb 18 00:20:10.246: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6550 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 00:20:10.246: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 00:20:10.341: INFO: Exec stderr: ""
Feb 18 00:20:10.341: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6550 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 00:20:10.341: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 00:20:10.440: INFO: Exec stderr: ""
Feb 18 00:20:10.440: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6550 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 00:20:10.440: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 00:20:10.551: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 18 00:20:10.551: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6550 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 00:20:10.551: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 00:20:10.632: INFO: Exec stderr: ""
Feb 18 00:20:10.632: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6550 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 00:20:10.632: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 00:20:10.786: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 18 00:20:10.786: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6550 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 00:20:10.786: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 00:20:10.876: INFO: Exec stderr: ""
Feb 18 00:20:10.876: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6550 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 00:20:10.876: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 00:20:10.984: INFO: Exec stderr: ""
Feb 18 00:20:10.984: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6550 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 00:20:10.984: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 00:20:11.082: INFO: Exec stderr: ""
Feb 18 00:20:11.082: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6550 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 00:20:11.082: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 00:20:11.261: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:20:11.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6550" for this suite.
Feb 18 00:21:03.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:21:03.503: INFO: namespace e2e-kubelet-etc-hosts-6550 deletion completed in 52.237325215s

• [SLOW TEST:67.696 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
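
The /etc/hosts checks above are plain execs; the signal being looked for is the header comment the kubelet writes into hosts files it manages, which should appear for the hostNetwork=false pod and be absent both for the hostNetwork=true pod and for the container mounting its own /etc/hosts. With this run's names:

# Expected to carry the kubelet's "Kubernetes-managed hosts file" header:
kubectl exec test-pod -c busybox-1 --namespace=e2e-kubelet-etc-hosts-6550 -- cat /etc/hosts
# hostNetwork=true, so the node's own hosts file shows through instead:
kubectl exec test-host-network-pod -c busybox-1 --namespace=e2e-kubelet-etc-hosts-6550 -- cat /etc/hosts
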
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:21:03.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb 18 00:21:03.570: INFO: Waiting up to 5m0s for pod "client-containers-19d07486-3cf5-46ae-b4f7-94af3aad53cc" in namespace "containers-3300" to be "success or failure"
Feb 18 00:21:03.575: INFO: Pod "client-containers-19d07486-3cf5-46ae-b4f7-94af3aad53cc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.462519ms
Feb 18 00:21:05.720: INFO: Pod "client-containers-19d07486-3cf5-46ae-b4f7-94af3aad53cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150430451s
Feb 18 00:21:07.724: INFO: Pod "client-containers-19d07486-3cf5-46ae-b4f7-94af3aad53cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153988488s
STEP: Saw pod success
Feb 18 00:21:07.724: INFO: Pod "client-containers-19d07486-3cf5-46ae-b4f7-94af3aad53cc" satisfied condition "success or failure"
Feb 18 00:21:07.727: INFO: Trying to get logs from node iruya-worker pod client-containers-19d07486-3cf5-46ae-b4f7-94af3aad53cc container test-container: 
STEP: delete the pod
Feb 18 00:21:07.794: INFO: Waiting for pod client-containers-19d07486-3cf5-46ae-b4f7-94af3aad53cc to disappear
Feb 18 00:21:07.804: INFO: Pod client-containers-19d07486-3cf5-46ae-b4f7-94af3aad53cc no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:21:07.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3300" for this suite.
Feb 18 00:21:13.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:21:13.937: INFO: namespace containers-3300 deletion completed in 6.126775646s

• [SLOW TEST:10.434 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
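
With command and args both omitted, the container runs whatever ENTRYPOINT/CMD the image bakes in. The minimal shape (image choice illustrative; the e2e test uses one of its own test images):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # no command/args: the image's own default ("sh") runs
EOF
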
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:21:13.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb 18 00:21:14.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-449'
Feb 18 00:21:14.295: INFO: stderr: ""
Feb 18 00:21:14.295: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 18 00:21:14.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-449'
Feb 18 00:21:14.399: INFO: stderr: ""
Feb 18 00:21:14.399: INFO: stdout: "update-demo-nautilus-9plgx update-demo-nautilus-kqqzh "
Feb 18 00:21:14.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9plgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:14.503: INFO: stderr: ""
Feb 18 00:21:14.503: INFO: stdout: ""
Feb 18 00:21:14.503: INFO: update-demo-nautilus-9plgx is created but not running
Feb 18 00:21:19.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-449'
Feb 18 00:21:19.592: INFO: stderr: ""
Feb 18 00:21:19.592: INFO: stdout: "update-demo-nautilus-9plgx update-demo-nautilus-kqqzh "
Feb 18 00:21:19.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9plgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:19.681: INFO: stderr: ""
Feb 18 00:21:19.681: INFO: stdout: "true"
Feb 18 00:21:19.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9plgx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:19.768: INFO: stderr: ""
Feb 18 00:21:19.768: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 00:21:19.768: INFO: validating pod update-demo-nautilus-9plgx
Feb 18 00:21:19.772: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 00:21:19.772: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 18 00:21:19.772: INFO: update-demo-nautilus-9plgx is verified up and running
Feb 18 00:21:19.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kqqzh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:19.870: INFO: stderr: ""
Feb 18 00:21:19.870: INFO: stdout: "true"
Feb 18 00:21:19.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kqqzh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:19.954: INFO: stderr: ""
Feb 18 00:21:19.954: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 00:21:19.954: INFO: validating pod update-demo-nautilus-kqqzh
Feb 18 00:21:19.961: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 00:21:19.961: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 18 00:21:19.961: INFO: update-demo-nautilus-kqqzh is verified up and running
STEP: scaling down the replication controller
Feb 18 00:21:19.963: INFO: scanned /root for discovery docs: 
Feb 18 00:21:19.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-449'
Feb 18 00:21:21.092: INFO: stderr: ""
Feb 18 00:21:21.092: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 18 00:21:21.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-449'
Feb 18 00:21:21.200: INFO: stderr: ""
Feb 18 00:21:21.200: INFO: stdout: "update-demo-nautilus-9plgx update-demo-nautilus-kqqzh "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 18 00:21:26.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-449'
Feb 18 00:21:26.295: INFO: stderr: ""
Feb 18 00:21:26.295: INFO: stdout: "update-demo-nautilus-9plgx update-demo-nautilus-kqqzh "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 18 00:21:31.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-449'
Feb 18 00:21:31.387: INFO: stderr: ""
Feb 18 00:21:31.387: INFO: stdout: "update-demo-nautilus-9plgx "
Feb 18 00:21:31.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9plgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:31.496: INFO: stderr: ""
Feb 18 00:21:31.496: INFO: stdout: "true"
Feb 18 00:21:31.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9plgx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:31.593: INFO: stderr: ""
Feb 18 00:21:31.593: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 00:21:31.593: INFO: validating pod update-demo-nautilus-9plgx
Feb 18 00:21:31.596: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 00:21:31.596: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 18 00:21:31.596: INFO: update-demo-nautilus-9plgx is verified up and running
STEP: scaling up the replication controller
Feb 18 00:21:31.599: INFO: scanned /root for discovery docs: 
Feb 18 00:21:31.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-449'
Feb 18 00:21:32.731: INFO: stderr: ""
Feb 18 00:21:32.731: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 18 00:21:32.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-449'
Feb 18 00:21:32.826: INFO: stderr: ""
Feb 18 00:21:32.826: INFO: stdout: "update-demo-nautilus-9plgx update-demo-nautilus-zxzcv "
Feb 18 00:21:32.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9plgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:32.924: INFO: stderr: ""
Feb 18 00:21:32.924: INFO: stdout: "true"
Feb 18 00:21:32.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9plgx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:33.048: INFO: stderr: ""
Feb 18 00:21:33.048: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 00:21:33.048: INFO: validating pod update-demo-nautilus-9plgx
Feb 18 00:21:33.053: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 00:21:33.053: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 18 00:21:33.053: INFO: update-demo-nautilus-9plgx is verified up and running
Feb 18 00:21:33.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zxzcv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:33.142: INFO: stderr: ""
Feb 18 00:21:33.142: INFO: stdout: ""
Feb 18 00:21:33.142: INFO: update-demo-nautilus-zxzcv is created but not running
Feb 18 00:21:38.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-449'
Feb 18 00:21:38.247: INFO: stderr: ""
Feb 18 00:21:38.247: INFO: stdout: "update-demo-nautilus-9plgx update-demo-nautilus-zxzcv "
Feb 18 00:21:38.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9plgx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:38.337: INFO: stderr: ""
Feb 18 00:21:38.337: INFO: stdout: "true"
Feb 18 00:21:38.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9plgx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:38.425: INFO: stderr: ""
Feb 18 00:21:38.425: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 00:21:38.425: INFO: validating pod update-demo-nautilus-9plgx
Feb 18 00:21:38.428: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 00:21:38.428: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 18 00:21:38.428: INFO: update-demo-nautilus-9plgx is verified up and running
Feb 18 00:21:38.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zxzcv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:38.530: INFO: stderr: ""
Feb 18 00:21:38.530: INFO: stdout: "true"
Feb 18 00:21:38.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zxzcv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-449'
Feb 18 00:21:38.615: INFO: stderr: ""
Feb 18 00:21:38.615: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 00:21:38.615: INFO: validating pod update-demo-nautilus-zxzcv
Feb 18 00:21:38.619: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 00:21:38.619: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 18 00:21:38.619: INFO: update-demo-nautilus-zxzcv is verified up and running
STEP: using delete to clean up resources
Feb 18 00:21:38.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-449'
Feb 18 00:21:38.725: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 18 00:21:38.725: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 18 00:21:38.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-449'
Feb 18 00:21:38.820: INFO: stderr: "No resources found.\n"
Feb 18 00:21:38.820: INFO: stdout: ""
Feb 18 00:21:38.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-449 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 18 00:21:38.907: INFO: stderr: ""
Feb 18 00:21:38.907: INFO: stdout: "update-demo-nautilus-9plgx\nupdate-demo-nautilus-zxzcv\n"
Feb 18 00:21:39.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-449'
Feb 18 00:21:39.534: INFO: stderr: "No resources found.\n"
Feb 18 00:21:39.534: INFO: stdout: ""
Feb 18 00:21:39.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-449 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 18 00:21:39.624: INFO: stderr: ""
Feb 18 00:21:39.624: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:21:39.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-449" for this suite.
Feb 18 00:22:01.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:22:01.782: INFO: namespace kubectl-449 deletion completed in 22.154319914s

• [SLOW TEST:47.845 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
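
The scaling itself is one command each way, with the same template polling as above deciding when the pod list has converged (names from this run):

kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-449
kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-449
# Poll pod names by label until the expected count is reached:
kubectl get pods -l name=update-demo --namespace=kubectl-449 \
  -o template --template='{{range.items}}{{.metadata.name}} {{end}}'
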
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:22:01.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb 18 00:22:01.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3105'
Feb 18 00:22:02.757: INFO: stderr: ""
Feb 18 00:22:02.757: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 18 00:22:02.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3105'
Feb 18 00:22:03.019: INFO: stderr: ""
Feb 18 00:22:03.019: INFO: stdout: ""
STEP: Replicas for name=update-demo: expected=2 actual=0
Feb 18 00:22:08.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3105'
Feb 18 00:22:08.120: INFO: stderr: ""
Feb 18 00:22:08.120: INFO: stdout: "update-demo-nautilus-hvb82 update-demo-nautilus-vts99 "
Feb 18 00:22:08.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvb82 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3105'
Feb 18 00:22:08.218: INFO: stderr: ""
Feb 18 00:22:08.218: INFO: stdout: "true"
Feb 18 00:22:08.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hvb82 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3105'
Feb 18 00:22:08.322: INFO: stderr: ""
Feb 18 00:22:08.322: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 00:22:08.322: INFO: validating pod update-demo-nautilus-hvb82
Feb 18 00:22:08.327: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 00:22:08.327: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 18 00:22:08.327: INFO: update-demo-nautilus-hvb82 is verified up and running
Feb 18 00:22:08.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vts99 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3105'
Feb 18 00:22:08.418: INFO: stderr: ""
Feb 18 00:22:08.418: INFO: stdout: "true"
Feb 18 00:22:08.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vts99 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3105'
Feb 18 00:22:08.508: INFO: stderr: ""
Feb 18 00:22:08.508: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 18 00:22:08.508: INFO: validating pod update-demo-nautilus-vts99
Feb 18 00:22:08.512: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 18 00:22:08.512: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 18 00:22:08.512: INFO: update-demo-nautilus-vts99 is verified up and running
STEP: rolling-update to new replication controller
Feb 18 00:22:08.515: INFO: scanned /root for discovery docs: 
Feb 18 00:22:08.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-3105'
Feb 18 00:22:31.396: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 18 00:22:31.396: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 18 00:22:31.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3105'
Feb 18 00:22:31.483: INFO: stderr: ""
Feb 18 00:22:31.483: INFO: stdout: "update-demo-kitten-654vl update-demo-kitten-m2lf8 "
Feb 18 00:22:31.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-654vl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3105'
Feb 18 00:22:31.567: INFO: stderr: ""
Feb 18 00:22:31.568: INFO: stdout: "true"
Feb 18 00:22:31.568: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-654vl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3105'
Feb 18 00:22:31.655: INFO: stderr: ""
Feb 18 00:22:31.655: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 18 00:22:31.655: INFO: validating pod update-demo-kitten-654vl
Feb 18 00:22:31.665: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 18 00:22:31.665: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb 18 00:22:31.665: INFO: update-demo-kitten-654vl is verified up and running
Feb 18 00:22:31.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m2lf8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3105'
Feb 18 00:22:31.769: INFO: stderr: ""
Feb 18 00:22:31.769: INFO: stdout: "true"
Feb 18 00:22:31.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m2lf8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3105'
Feb 18 00:22:31.862: INFO: stderr: ""
Feb 18 00:22:31.862: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 18 00:22:31.862: INFO: validating pod update-demo-kitten-m2lf8
Feb 18 00:22:31.866: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 18 00:22:31.866: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb 18 00:22:31.866: INFO: update-demo-kitten-m2lf8 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:22:31.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3105" for this suite.
Feb 18 00:22:55.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:22:55.995: INFO: namespace kubectl-3105 deletion completed in 24.125716819s

• [SLOW TEST:54.213 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
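
The rolling update exercised above can be reproduced by hand, roughly as follows. The harness piped a new RC manifest via -f -; that manifest is not in the log, so this sketch uses the in-place --image form instead, with the namespace and names taken from this run:

  # Deprecated path, as the stderr above notes; Deployments and `kubectl rollout` replace it.
  kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus \
    --update-period=1s --image=gcr.io/kubernetes-e2e-test-images/kitten:1.0 \
    --namespace=kubectl-3105
  # Verify which image each surviving pod runs, as the harness does via go-template:
  kubectl get pods -l name=update-demo --namespace=kubectl-3105 -o template \
    --template='{{range .items}}{{range .spec.containers}}{{.image}}{{"\n"}}{{end}}{{end}}'
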
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check if all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:22:55.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 00:22:56.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 18 00:22:56.194: INFO: stderr: ""
Feb 18 00:22:56.194: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-09-14T08:06:34Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:22:56.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8609" for this suite.
Feb 18 00:23:02.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:23:02.305: INFO: namespace kubectl-8609 deletion completed in 6.10652858s

• [SLOW TEST:6.310 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if all data is printed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
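
The assertion in this spec is only that both halves of the version report are printed. The same check by hand:

  kubectl --kubeconfig=/root/.kube/config version
  # Expect one "Client Version: version.Info{...}" line and one "Server Version: version.Info{...}" line.
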
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:23:02.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 18 00:23:02.384: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:23:09.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1902" for this suite.
Feb 18 00:23:15.654: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:23:15.865: INFO: namespace init-container-1902 deletion completed in 6.233049744s

• [SLOW TEST:13.560 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
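
The pod spec is only summarized in the log ("initContainers in spec.initContainers"). A minimal pod of the same shape, with placeholder names and a busybox image standing in for the harness's actual spec:

  kubectl apply -n init-container-1902 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo            # placeholder name
  spec:
    restartPolicy: Never
    initContainers:            # run to completion, in order, before 'containers' start
    - name: init-1
      image: busybox:1.29
      command: ['sh', '-c', 'true']
    containers:
    - name: run-1
      image: busybox:1.29
      command: ['sh', '-c', 'echo app started']
  EOF
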
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:23:15.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:23:21.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-435" for this suite.
Feb 18 00:23:27.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:23:27.558: INFO: namespace watch-435 deletion completed in 6.185792626s

• [SLOW TEST:11.693 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
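
What the background goroutine and the multiple watches assert is the ordering guarantee of the watch API: watches opened from the same resourceVersion must deliver the same events in the same order. A raw watch can be opened directly against the API server (namespace from this run; resourceVersion=0 is an assumption):

  kubectl get --raw \
    "/api/v1/namespaces/watch-435/configmaps?watch=true&resourceVersion=0"
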
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:23:27.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7573
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 18 00:23:27.681: INFO: Found 0 stateful pods, waiting for 3
Feb 18 00:23:37.697: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 00:23:37.697: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 00:23:37.697: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 18 00:23:47.687: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 00:23:47.687: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 00:23:47.687: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 18 00:23:47.714: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 18 00:23:57.757: INFO: Updating stateful set ss2
Feb 18 00:23:57.774: INFO: Waiting for Pod statefulset-7573/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb 18 00:24:07.991: INFO: Found 2 stateful pods, waiting for 3
Feb 18 00:24:17.996: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 00:24:17.996: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 00:24:17.996: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 18 00:24:18.040: INFO: Updating stateful set ss2
Feb 18 00:24:18.061: INFO: Waiting for Pod statefulset-7573/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 18 00:24:28.085: INFO: Updating stateful set ss2
Feb 18 00:24:28.090: INFO: Waiting for StatefulSet statefulset-7573/ss2 to complete update
Feb 18 00:24:28.090: INFO: Waiting for Pod statefulset-7573/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 18 00:24:38.097: INFO: Waiting for StatefulSet statefulset-7573/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 18 00:24:48.098: INFO: Deleting all statefulset in ns statefulset-7573
Feb 18 00:24:48.101: INFO: Scaling statefulset ss2 to 0
Feb 18 00:25:08.134: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 00:25:08.137: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:25:08.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7573" for this suite.
Feb 18 00:25:14.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:25:14.314: INFO: namespace statefulset-7573 deletion completed in 6.149611371s

• [SLOW TEST:106.755 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
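
Both the canary and the phased rollout above hinge on the RollingUpdate partition: only pods with ordinal >= partition receive the new template revision. Sketched against this run's StatefulSet (the container name nginx is an assumption, not shown in the log):

  # Canary: with 3 replicas, partition=2 updates only ss2-2.
  kubectl patch statefulset ss2 -n statefulset-7573 -p \
    '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
  kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine -n statefulset-7573
  # Phased rollout: lower the partition step by step (2 -> 1 -> 0).
  kubectl patch statefulset ss2 -n statefulset-7573 -p \
    '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
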
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:25:14.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-f072791b-8b2d-4527-97c7-0bee9454da13
STEP: Creating a pod to test consume configMaps
Feb 18 00:25:14.395: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2da0fe7-d8bf-4a02-9a08-029d781ee7b6" in namespace "projected-6679" to be "success or failure"
Feb 18 00:25:14.458: INFO: Pod "pod-projected-configmaps-b2da0fe7-d8bf-4a02-9a08-029d781ee7b6": Phase="Pending", Reason="", readiness=false. Elapsed: 63.113024ms
Feb 18 00:25:16.572: INFO: Pod "pod-projected-configmaps-b2da0fe7-d8bf-4a02-9a08-029d781ee7b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176629793s
Feb 18 00:25:18.576: INFO: Pod "pod-projected-configmaps-b2da0fe7-d8bf-4a02-9a08-029d781ee7b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.180491798s
STEP: Saw pod success
Feb 18 00:25:18.576: INFO: Pod "pod-projected-configmaps-b2da0fe7-d8bf-4a02-9a08-029d781ee7b6" satisfied condition "success or failure"
Feb 18 00:25:18.579: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-b2da0fe7-d8bf-4a02-9a08-029d781ee7b6 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 18 00:25:18.642: INFO: Waiting for pod pod-projected-configmaps-b2da0fe7-d8bf-4a02-9a08-029d781ee7b6 to disappear
Feb 18 00:25:18.653: INFO: Pod pod-projected-configmaps-b2da0fe7-d8bf-4a02-9a08-029d781ee7b6 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:25:18.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6679" for this suite.
Feb 18 00:25:24.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:25:24.753: INFO: namespace projected-6679 deletion completed in 6.095139691s

• [SLOW TEST:10.439 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
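
"Multiple volumes in the same pod" means the same projected ConfigMap is mounted through two separate volumes. Roughly, with placeholder ConfigMap name and key:

  kubectl apply -n projected-6679 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-twice      # placeholder
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox:1.29
      command: ['sh', '-c', 'cat /etc/projected-1/key /etc/projected-2/key']
      volumeMounts:
      - name: vol-1
        mountPath: /etc/projected-1
      - name: vol-2
        mountPath: /etc/projected-2
    volumes:
    - name: vol-1
      projected:
        sources:
        - configMap:
            name: my-config    # placeholder
    - name: vol-2
      projected:
        sources:
        - configMap:
            name: my-config
  EOF
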
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:25:24.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 18 00:25:24.789: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:25:31.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7001" for this suite.
Feb 18 00:25:37.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:25:37.287: INFO: namespace init-container-7001 deletion completed in 6.123321069s

• [SLOW TEST:12.534 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
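
This is the negative case of the earlier init-container spec: with restartPolicy Never, a failing init container is not retried, the app containers never start, and the pod phase goes to Failed. An illustrative spec, not the harness's exact pod:

  kubectl apply -n init-container-7001 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-fail-demo       # placeholder
  spec:
    restartPolicy: Never
    initContainers:
    - name: init-fail
      image: busybox:1.29
      command: ['sh', '-c', 'exit 1']   # fails once, never retried
    containers:
    - name: app
      image: busybox:1.29
      command: ['sh', '-c', 'echo never runs']
  EOF
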
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:25:37.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb 18 00:25:37.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7283'
Feb 18 00:25:37.614: INFO: stderr: ""
Feb 18 00:25:37.614: INFO: stdout: "pod/pause created\n"
Feb 18 00:25:37.614: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 18 00:25:37.614: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7283" to be "running and ready"
Feb 18 00:25:37.633: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 18.873731ms
Feb 18 00:25:39.637: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023221241s
Feb 18 00:25:41.642: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.027580091s
Feb 18 00:25:41.642: INFO: Pod "pause" satisfied condition "running and ready"
Feb 18 00:25:41.642: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 18 00:25:41.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7283'
Feb 18 00:25:41.745: INFO: stderr: ""
Feb 18 00:25:41.745: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 18 00:25:41.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7283'
Feb 18 00:25:41.832: INFO: stderr: ""
Feb 18 00:25:41.832: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 18 00:25:41.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7283'
Feb 18 00:25:41.932: INFO: stderr: ""
Feb 18 00:25:41.932: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 18 00:25:41.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7283'
Feb 18 00:25:42.031: INFO: stderr: ""
Feb 18 00:25:42.031: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb 18 00:25:42.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7283'
Feb 18 00:25:42.158: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 18 00:25:42.158: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 18 00:25:42.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7283'
Feb 18 00:25:42.249: INFO: stderr: "No resources found.\n"
Feb 18 00:25:42.249: INFO: stdout: ""
Feb 18 00:25:42.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7283 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 18 00:25:42.340: INFO: stderr: ""
Feb 18 00:25:42.340: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:25:42.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7283" for this suite.
Feb 18 00:25:48.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:25:48.511: INFO: namespace kubectl-7283 deletion completed in 6.167893573s

• [SLOW TEST:11.223 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
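
The three label operations above, condensed (same namespace and pod as this run):

  kubectl label pods pause testing-label=testing-label-value -n kubectl-7283   # add the label
  kubectl get pod pause -L testing-label -n kubectl-7283                       # print it as a column
  kubectl label pods pause testing-label- -n kubectl-7283                      # trailing '-' removes it
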
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:25:48.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 00:25:48.808: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92a1741f-f8ad-4a32-ab56-1a2387c7253f" in namespace "projected-6677" to be "success or failure"
Feb 18 00:25:48.813: INFO: Pod "downwardapi-volume-92a1741f-f8ad-4a32-ab56-1a2387c7253f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.151963ms
Feb 18 00:25:50.817: INFO: Pod "downwardapi-volume-92a1741f-f8ad-4a32-ab56-1a2387c7253f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009460858s
Feb 18 00:25:52.821: INFO: Pod "downwardapi-volume-92a1741f-f8ad-4a32-ab56-1a2387c7253f": Phase="Running", Reason="", readiness=true. Elapsed: 4.013126818s
Feb 18 00:25:54.825: INFO: Pod "downwardapi-volume-92a1741f-f8ad-4a32-ab56-1a2387c7253f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017175659s
STEP: Saw pod success
Feb 18 00:25:54.825: INFO: Pod "downwardapi-volume-92a1741f-f8ad-4a32-ab56-1a2387c7253f" satisfied condition "success or failure"
Feb 18 00:25:54.828: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-92a1741f-f8ad-4a32-ab56-1a2387c7253f container client-container: 
STEP: delete the pod
Feb 18 00:25:54.879: INFO: Waiting for pod downwardapi-volume-92a1741f-f8ad-4a32-ab56-1a2387c7253f to disappear
Feb 18 00:25:54.897: INFO: Pod downwardapi-volume-92a1741f-f8ad-4a32-ab56-1a2387c7253f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:25:54.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6677" for this suite.
Feb 18 00:26:00.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:26:01.029: INFO: namespace projected-6677 deletion completed in 6.12830919s

• [SLOW TEST:12.518 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
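
The downward API volume resolves limits.cpu through a resourceFieldRef; because the container deliberately sets no CPU limit, the file falls back to the node's allocatable CPU. A sketch with placeholder names:

  kubectl apply -n projected-6677 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-cpu-demo    # placeholder
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ['sh', '-c', 'cat /etc/podinfo/cpu_limit']   # no CPU limit set on purpose
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
  EOF
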
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:26:01.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-5a68a708-9866-4175-bec3-62401c6621d6
STEP: Creating secret with name secret-projected-all-test-volume-51eab175-3aed-4327-8bbf-db020c36fc07
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 18 00:26:01.100: INFO: Waiting up to 5m0s for pod "projected-volume-0cbb3ed5-b524-46c4-aeb4-740afd302eef" in namespace "projected-5893" to be "success or failure"
Feb 18 00:26:01.104: INFO: Pod "projected-volume-0cbb3ed5-b524-46c4-aeb4-740afd302eef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01976ms
Feb 18 00:26:03.108: INFO: Pod "projected-volume-0cbb3ed5-b524-46c4-aeb4-740afd302eef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00819835s
Feb 18 00:26:05.112: INFO: Pod "projected-volume-0cbb3ed5-b524-46c4-aeb4-740afd302eef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012040396s
STEP: Saw pod success
Feb 18 00:26:05.112: INFO: Pod "projected-volume-0cbb3ed5-b524-46c4-aeb4-740afd302eef" satisfied condition "success or failure"
Feb 18 00:26:05.115: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-0cbb3ed5-b524-46c4-aeb4-740afd302eef container projected-all-volume-test: 
STEP: delete the pod
Feb 18 00:26:05.142: INFO: Waiting for pod projected-volume-0cbb3ed5-b524-46c4-aeb4-740afd302eef to disappear
Feb 18 00:26:05.145: INFO: Pod projected-volume-0cbb3ed5-b524-46c4-aeb4-740afd302eef no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:26:05.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5893" for this suite.
Feb 18 00:26:11.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:26:11.263: INFO: namespace projected-5893 deletion completed in 6.114443959s

• [SLOW TEST:10.233 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
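
"All components" here means one projected volume fed by a ConfigMap, a Secret, and the downward API at once, which is exactly what the three STEP lines above create. Roughly, with placeholder resource names:

  kubectl apply -n projected-5893 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-all-demo   # placeholder
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox:1.29
      command: ['sh', '-c', 'ls -R /all-in-one']
      volumeMounts:
      - name: all-in-one
        mountPath: /all-in-one
    volumes:
    - name: all-in-one
      projected:
        sources:
        - configMap:
            name: my-config    # placeholder
        - secret:
            name: my-secret    # placeholder
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name
  EOF
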
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:26:11.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 18 00:26:18.110: INFO: 10 pods remaining
Feb 18 00:26:18.110: INFO: 6 pods have nil DeletionTimestamp
Feb 18 00:26:18.110: INFO: 
Feb 18 00:26:19.819: INFO: 0 pods remaining
Feb 18 00:26:19.819: INFO: 0 pods have nil DeletionTimestamp
Feb 18 00:26:19.819: INFO: 
Feb 18 00:26:21.164: INFO: 0 pods remaining
Feb 18 00:26:21.164: INFO: 0 pods have nil DeletionTimestamp
Feb 18 00:26:21.164: INFO: 
STEP: Gathering metrics
W0218 00:26:22.153734       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 00:26:22.153: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:26:22.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5200" for this suite.
Feb 18 00:26:28.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:26:29.007: INFO: namespace gc-5200 deletion completed in 6.648744918s

• [SLOW TEST:17.743 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
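
"deleteOptions says so" refers to foreground cascading deletion: the RC gets a deletionTimestamp plus a foregroundDeletion finalizer, and is only removed once the garbage collector has deleted all its pods, which matches the countdown of remaining pods above. Via the raw API (the RC name is a placeholder; this run's manifest is not in the log):

  kubectl proxy --port=8001 &
  curl -X DELETE \
    "http://127.0.0.1:8001/api/v1/namespaces/gc-5200/replicationcontrollers/my-rc" \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
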
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:26:29.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-6wzl
STEP: Creating a pod to test atomic-volume-subpath
Feb 18 00:26:29.238: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6wzl" in namespace "subpath-1762" to be "success or failure"
Feb 18 00:26:29.262: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Pending", Reason="", readiness=false. Elapsed: 23.937295ms
Feb 18 00:26:31.266: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028365414s
Feb 18 00:26:33.276: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Running", Reason="", readiness=true. Elapsed: 4.037635799s
Feb 18 00:26:35.283: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Running", Reason="", readiness=true. Elapsed: 6.044626549s
Feb 18 00:26:37.287: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Running", Reason="", readiness=true. Elapsed: 8.049140679s
Feb 18 00:26:39.291: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Running", Reason="", readiness=true. Elapsed: 10.053334624s
Feb 18 00:26:41.295: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Running", Reason="", readiness=true. Elapsed: 12.05733741s
Feb 18 00:26:43.299: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Running", Reason="", readiness=true. Elapsed: 14.061217449s
Feb 18 00:26:45.303: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Running", Reason="", readiness=true. Elapsed: 16.065615737s
Feb 18 00:26:47.307: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Running", Reason="", readiness=true. Elapsed: 18.069529498s
Feb 18 00:26:49.312: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Running", Reason="", readiness=true. Elapsed: 20.073736985s
Feb 18 00:26:51.316: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Running", Reason="", readiness=true. Elapsed: 22.078201498s
Feb 18 00:26:53.320: INFO: Pod "pod-subpath-test-configmap-6wzl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.082067756s
STEP: Saw pod success
Feb 18 00:26:53.320: INFO: Pod "pod-subpath-test-configmap-6wzl" satisfied condition "success or failure"
Feb 18 00:26:53.322: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-6wzl container test-container-subpath-configmap-6wzl: 
STEP: delete the pod
Feb 18 00:26:53.344: INFO: Waiting for pod pod-subpath-test-configmap-6wzl to disappear
Feb 18 00:26:53.348: INFO: Pod pod-subpath-test-configmap-6wzl no longer exists
STEP: Deleting pod pod-subpath-test-configmap-6wzl
Feb 18 00:26:53.348: INFO: Deleting pod "pod-subpath-test-configmap-6wzl" in namespace "subpath-1762"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:26:53.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1762" for this suite.
Feb 18 00:26:59.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:26:59.486: INFO: namespace subpath-1762 deletion completed in 6.130380948s

• [SLOW TEST:30.479 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
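
The subPath mount under test exposes a single entry of a ConfigMap volume as one file; the long Running phase above suggests the test container keeps re-reading it while the atomic-writer machinery updates the volume. The minimal shape, with placeholder names:

  kubectl apply -n subpath-1762 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo         # placeholder
  spec:
    restartPolicy: Never
    containers:
    - name: test
      image: busybox:1.29
      command: ['sh', '-c', 'cat /etc/config/file.txt']
      volumeMounts:
      - name: config
        mountPath: /etc/config/file.txt
        subPath: file.txt      # mount one entry of the volume, not the whole directory
    volumes:
    - name: config
      configMap:
        name: my-config        # placeholder; must contain the key file.txt
  EOF
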
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:26:59.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 18 00:26:59.608: INFO: Waiting up to 5m0s for pod "pod-eac09e06-f947-415e-8e8f-aae7cd2cc7e6" in namespace "emptydir-2074" to be "success or failure"
Feb 18 00:26:59.630: INFO: Pod "pod-eac09e06-f947-415e-8e8f-aae7cd2cc7e6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.276152ms
Feb 18 00:27:01.634: INFO: Pod "pod-eac09e06-f947-415e-8e8f-aae7cd2cc7e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02577088s
Feb 18 00:27:03.639: INFO: Pod "pod-eac09e06-f947-415e-8e8f-aae7cd2cc7e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030082384s
STEP: Saw pod success
Feb 18 00:27:03.639: INFO: Pod "pod-eac09e06-f947-415e-8e8f-aae7cd2cc7e6" satisfied condition "success or failure"
Feb 18 00:27:03.642: INFO: Trying to get logs from node iruya-worker2 pod pod-eac09e06-f947-415e-8e8f-aae7cd2cc7e6 container test-container: 
STEP: delete the pod
Feb 18 00:27:03.679: INFO: Waiting for pod pod-eac09e06-f947-415e-8e8f-aae7cd2cc7e6 to disappear
Feb 18 00:27:03.729: INFO: Pod pod-eac09e06-f947-415e-8e8f-aae7cd2cc7e6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:27:03.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2074" for this suite.
Feb 18 00:27:09.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:27:09.868: INFO: namespace emptydir-2074 deletion completed in 6.133674749s

• [SLOW TEST:10.380 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
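
The tuple (root,0666,tmpfs) decodes as: run as root, expect files created with mode 0666, on a memory-backed emptyDir. The conformance test uses its own mounttest image; busybox below is a stand-in:

  kubectl apply -n emptydir-2074 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo        # placeholder
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      command: ['sh', '-c', 'touch /mnt/tmp/f && chmod 0666 /mnt/tmp/f && ls -l /mnt/tmp/f && mount | grep /mnt/tmp']
      volumeMounts:
      - name: tmp
        mountPath: /mnt/tmp
    volumes:
    - name: tmp
      emptyDir:
        medium: Memory         # tmpfs-backed
  EOF
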
SSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:27:09.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb 18 00:27:09.908: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb 18 00:27:10.591: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 18 00:27:13.208: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749204830, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749204830, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749204830, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749204830, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 00:27:15.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749204830, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749204830, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749204830, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749204830, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 00:27:17.842: INFO: Waited 623.479631ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:27:18.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5186" for this suite.
Feb 18 00:27:24.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:27:24.734: INFO: namespace aggregator-5186 deletion completed in 6.365339967s

• [SLOW TEST:14.866 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
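
Registering an aggregated API, as the sample API server test does, means creating an APIService object that routes a group/version to an in-cluster Service. The shape of that registration (group, Service name, and the TLS shortcut are illustrative, not this run's exact objects):

  kubectl apply -f - <<'EOF'
  apiVersion: apiregistration.k8s.io/v1
  kind: APIService
  metadata:
    name: v1alpha1.wardle.k8s.io          # must be <version>.<group>
  spec:
    group: wardle.k8s.io
    version: v1alpha1
    service:
      name: sample-api                    # placeholder Service fronting the deployment
      namespace: aggregator-5186
    insecureSkipTLSVerify: true           # illustrative shortcut; real setups supply caBundle
    groupPriorityMinimum: 2000
    versionPriority: 200
  EOF
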
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:27:24.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 00:27:24.821: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc8383d8-3f2b-423f-b18b-3a775dc46d76" in namespace "downward-api-2380" to be "success or failure"
Feb 18 00:27:24.829: INFO: Pod "downwardapi-volume-fc8383d8-3f2b-423f-b18b-3a775dc46d76": Phase="Pending", Reason="", readiness=false. Elapsed: 7.91411ms
Feb 18 00:27:26.833: INFO: Pod "downwardapi-volume-fc8383d8-3f2b-423f-b18b-3a775dc46d76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011887478s
Feb 18 00:27:28.837: INFO: Pod "downwardapi-volume-fc8383d8-3f2b-423f-b18b-3a775dc46d76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016016508s
STEP: Saw pod success
Feb 18 00:27:28.837: INFO: Pod "downwardapi-volume-fc8383d8-3f2b-423f-b18b-3a775dc46d76" satisfied condition "success or failure"
Feb 18 00:27:28.840: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-fc8383d8-3f2b-423f-b18b-3a775dc46d76 container client-container: 
STEP: delete the pod
Feb 18 00:27:28.862: INFO: Waiting for pod downwardapi-volume-fc8383d8-3f2b-423f-b18b-3a775dc46d76 to disappear
Feb 18 00:27:28.866: INFO: Pod downwardapi-volume-fc8383d8-3f2b-423f-b18b-3a775dc46d76 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:27:28.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2380" for this suite.
Feb 18 00:27:34.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:27:34.972: INFO: namespace downward-api-2380 deletion completed in 6.10245155s

• [SLOW TEST:10.236 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
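
Same defaulting behavior as the CPU case earlier, but through a plain downwardAPI volume and limits.memory: with no memory limit on the container, the file reports the node's allocatable memory. Sketch with placeholder names:

  kubectl apply -n downward-api-2380 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mem-demo    # placeholder
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29
      command: ['sh', '-c', 'cat /etc/podinfo/mem_limit']   # no memory limit set on purpose
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory
  EOF
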
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:27:34.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-054efdef-cffd-4c76-9198-69dce458dc3e
STEP: Creating a pod to test consume configMaps
Feb 18 00:27:35.043: INFO: Waiting up to 5m0s for pod "pod-configmaps-9f7c6913-5642-4059-928f-0c27d22f228e" in namespace "configmap-4459" to be "success or failure"
Feb 18 00:27:35.059: INFO: Pod "pod-configmaps-9f7c6913-5642-4059-928f-0c27d22f228e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.02796ms
Feb 18 00:27:37.063: INFO: Pod "pod-configmaps-9f7c6913-5642-4059-928f-0c27d22f228e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019709125s
Feb 18 00:27:39.067: INFO: Pod "pod-configmaps-9f7c6913-5642-4059-928f-0c27d22f228e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024177152s
STEP: Saw pod success
Feb 18 00:27:39.067: INFO: Pod "pod-configmaps-9f7c6913-5642-4059-928f-0c27d22f228e" satisfied condition "success or failure"
Feb 18 00:27:39.070: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-9f7c6913-5642-4059-928f-0c27d22f228e container configmap-volume-test: 
STEP: delete the pod
Feb 18 00:27:39.096: INFO: Waiting for pod pod-configmaps-9f7c6913-5642-4059-928f-0c27d22f228e to disappear
Feb 18 00:27:39.100: INFO: Pod pod-configmaps-9f7c6913-5642-4059-928f-0c27d22f228e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:27:39.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4459" for this suite.
Feb 18 00:27:45.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:27:45.299: INFO: namespace configmap-4459 deletion completed in 6.195465909s

• [SLOW TEST:10.327 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
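Here the configMap volume maps a key to a custom path and sets a per-item file mode, which is the "Item mode" the test name refers to. A sketch of the volume stanza; key, path, and mode are chosen for illustration:

  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # the generated name carries a uuid suffix
      items:
      - key: data-1                     # key and path are illustrative
        path: path/to/data-2
        mode: 0400                      # per-item mode overrides defaultMode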
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:27:45.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0218 00:27:57.909127       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 18 00:27:57.909: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:27:57.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6473" for this suite.
Feb 18 00:28:07.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:28:08.020: INFO: namespace gc-6473 deletion completed in 10.106407354s

• [SLOW TEST:22.721 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
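The STEP lines above set up pods that carry two ownerReferences: one to the rc being deleted with foreground propagation, one to the rc that stays. Because a valid owner remains, the garbage collector must not cascade-delete those pods. Roughly, the pod metadata under test looks like this (uids elided, owner names taken from the STEP lines):

  apiVersion: v1
  kind: Pod
  metadata:
    generateName: simpletest-rc-to-be-deleted-
    ownerReferences:
    - apiVersion: v1
      kind: ReplicationController
      name: simpletest-rc-to-be-deleted   # owner deleted, waiting for dependents
      uid: "..."                          # uid elided
    - apiVersion: v1
      kind: ReplicationController
      name: simpletest-rc-to-stay         # surviving owner keeps the pod alive
      uid: "..."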
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:28:08.021: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-pl8hm in namespace proxy-8623
I0218 00:28:08.378502       6 runners.go:180] Created replication controller with name: proxy-service-pl8hm, namespace: proxy-8623, replica count: 1
I0218 00:28:09.429015       6 runners.go:180] proxy-service-pl8hm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 00:28:10.429218       6 runners.go:180] proxy-service-pl8hm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 00:28:11.429397       6 runners.go:180] proxy-service-pl8hm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 00:28:12.429639       6 runners.go:180] proxy-service-pl8hm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0218 00:28:13.429908       6 runners.go:180] proxy-service-pl8hm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0218 00:28:14.430178       6 runners.go:180] proxy-service-pl8hm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0218 00:28:15.430379       6 runners.go:180] proxy-service-pl8hm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0218 00:28:16.430672       6 runners.go:180] proxy-service-pl8hm Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 18 00:28:16.434: INFO: setup took 8.370079284s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
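The attempt lines that follow exercise the apiserver proxy subresource in two shapes, /api/v1/namespaces/<ns>/pods/[scheme:]<pod>:<port>/proxy/ and /api/v1/namespaces/<ns>/services/[scheme:]<svc>:<portname>/proxy/. The named-port service variants resolve against a service shaped roughly like the sketch below; the port numbers are inferred from the pod ports (160, 162, 460, 462) visible in the URLs, so treat them as assumptions:

  apiVersion: v1
  kind: Service
  metadata:
    name: proxy-service-pl8hm
  spec:
    selector:
      name: proxy-service-pl8hm    # label assumed to match the RC's pods
    ports:
    - name: portname1              # http "foo" endpoint
      port: 80
      targetPort: 160
    - name: portname2              # http "bar" endpoint
      port: 81
      targetPort: 162
    - name: tlsportname1           # https "tls baz" endpoint
      port: 443
      targetPort: 460
    - name: tlsportname2           # https "tls qux" endpoint
      port: 444
      targetPort: 462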
Feb 18 00:28:16.442: INFO: (0) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 7.419245ms)
Feb 18 00:28:16.442: INFO: (0) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 7.993074ms)
Feb 18 00:28:16.442: INFO: (0) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 8.178769ms)
Feb 18 00:28:16.442: INFO: (0) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 8.286101ms)
Feb 18 00:28:16.443: INFO: (0) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname1/proxy/: foo (200; 8.498406ms)
Feb 18 00:28:16.443: INFO: (0) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 8.860544ms)
Feb 18 00:28:16.443: INFO: (0) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:1080/proxy/: ... (200; 9.20828ms)
Feb 18 00:28:16.443: INFO: (0) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 9.238328ms)
Feb 18 00:28:16.445: INFO: (0) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 11.376611ms)
Feb 18 00:28:16.445: INFO: (0) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 11.506107ms)
Feb 18 00:28:16.447: INFO: (0) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 13.077655ms)
Feb 18 00:28:16.448: INFO: (0) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 13.524597ms)
Feb 18 00:28:16.450: INFO: (0) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: ... (200; 3.696819ms)
Feb 18 00:28:16.456: INFO: (1) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 3.638315ms)
Feb 18 00:28:16.456: INFO: (1) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 3.82465ms)
Feb 18 00:28:16.457: INFO: (1) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 4.88099ms)
Feb 18 00:28:16.457: INFO: (1) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: ... (200; 3.317106ms)
Feb 18 00:28:16.463: INFO: (2) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 5.093902ms)
Feb 18 00:28:16.463: INFO: (2) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 5.129003ms)
Feb 18 00:28:16.463: INFO: (2) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 5.208723ms)
Feb 18 00:28:16.463: INFO: (2) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 5.19427ms)
Feb 18 00:28:16.463: INFO: (2) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 5.289037ms)
Feb 18 00:28:16.464: INFO: (2) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 5.489102ms)
Feb 18 00:28:16.464: INFO: (2) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 5.792746ms)
Feb 18 00:28:16.464: INFO: (2) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 5.851444ms)
Feb 18 00:28:16.464: INFO: (2) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname1/proxy/: foo (200; 5.882788ms)
Feb 18 00:28:16.465: INFO: (2) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 6.609827ms)
Feb 18 00:28:16.465: INFO: (2) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 7.26258ms)
Feb 18 00:28:16.465: INFO: (2) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 7.279304ms)
Feb 18 00:28:16.465: INFO: (2) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 7.245801ms)
Feb 18 00:28:16.470: INFO: (3) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 4.809623ms)
Feb 18 00:28:16.470: INFO: (3) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:1080/proxy/: ... (200; 4.901288ms)
Feb 18 00:28:16.471: INFO: (3) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 5.348384ms)
Feb 18 00:28:16.471: INFO: (3) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 5.409025ms)
Feb 18 00:28:16.471: INFO: (3) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 5.434896ms)
Feb 18 00:28:16.471: INFO: (3) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 5.422783ms)
Feb 18 00:28:16.471: INFO: (3) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 5.927973ms)
Feb 18 00:28:16.472: INFO: (3) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 6.022105ms)
Feb 18 00:28:16.472: INFO: (3) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 5.981506ms)
Feb 18 00:28:16.472: INFO: (3) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 6.093095ms)
Feb 18 00:28:16.472: INFO: (3) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: test (200; 2.829403ms)
Feb 18 00:28:16.475: INFO: (4) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 3.620508ms)
Feb 18 00:28:16.478: INFO: (4) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 5.738974ms)
Feb 18 00:28:16.478: INFO: (4) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 5.893771ms)
Feb 18 00:28:16.478: INFO: (4) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 6.2676ms)
Feb 18 00:28:16.479: INFO: (4) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 6.951571ms)
Feb 18 00:28:16.480: INFO: (4) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:1080/proxy/: ... (200; 7.779723ms)
Feb 18 00:28:16.480: INFO: (4) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 8.152395ms)
Feb 18 00:28:16.480: INFO: (4) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 8.211831ms)
Feb 18 00:28:16.480: INFO: (4) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: ... (200; 2.230917ms)
Feb 18 00:28:16.486: INFO: (5) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: test<... (200; 4.512798ms)
Feb 18 00:28:16.488: INFO: (5) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 4.552847ms)
Feb 18 00:28:16.488: INFO: (5) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 4.571068ms)
Feb 18 00:28:16.488: INFO: (5) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 4.817305ms)
Feb 18 00:28:16.488: INFO: (5) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 4.8169ms)
Feb 18 00:28:16.488: INFO: (5) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 4.75038ms)
Feb 18 00:28:16.488: INFO: (5) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 4.836103ms)
Feb 18 00:28:16.489: INFO: (5) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 5.053585ms)
Feb 18 00:28:16.489: INFO: (5) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 5.131212ms)
Feb 18 00:28:16.489: INFO: (5) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 5.139499ms)
Feb 18 00:28:16.491: INFO: (6) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 1.932447ms)
Feb 18 00:28:16.492: INFO: (6) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 3.285602ms)
Feb 18 00:28:16.492: INFO: (6) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 3.254112ms)
Feb 18 00:28:16.492: INFO: (6) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 3.273818ms)
Feb 18 00:28:16.492: INFO: (6) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 3.495576ms)
Feb 18 00:28:16.493: INFO: (6) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 4.215099ms)
Feb 18 00:28:16.493: INFO: (6) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 4.296065ms)
Feb 18 00:28:16.493: INFO: (6) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 4.270445ms)
Feb 18 00:28:16.493: INFO: (6) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname1/proxy/: foo (200; 4.323786ms)
Feb 18 00:28:16.493: INFO: (6) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: ... (200; 4.406808ms)
Feb 18 00:28:16.493: INFO: (6) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 4.34126ms)
Feb 18 00:28:16.493: INFO: (6) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 4.426209ms)
Feb 18 00:28:16.493: INFO: (6) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 4.44169ms)
Feb 18 00:28:16.495: INFO: (7) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 2.007942ms)
Feb 18 00:28:16.495: INFO: (7) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: test (200; 2.29828ms)
Feb 18 00:28:16.497: INFO: (7) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 3.690981ms)
Feb 18 00:28:16.497: INFO: (7) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 4.118593ms)
Feb 18 00:28:16.497: INFO: (7) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 4.075117ms)
Feb 18 00:28:16.497: INFO: (7) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 4.073123ms)
Feb 18 00:28:16.498: INFO: (7) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 4.148933ms)
Feb 18 00:28:16.498: INFO: (7) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 4.199708ms)
Feb 18 00:28:16.498: INFO: (7) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 4.40658ms)
Feb 18 00:28:16.498: INFO: (7) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:1080/proxy/: ... (200; 4.564831ms)
Feb 18 00:28:16.498: INFO: (7) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 4.500608ms)
Feb 18 00:28:16.498: INFO: (7) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 4.520006ms)
Feb 18 00:28:16.498: INFO: (7) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 4.574636ms)
Feb 18 00:28:16.498: INFO: (7) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname1/proxy/: foo (200; 4.570214ms)
Feb 18 00:28:16.498: INFO: (7) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 4.578553ms)
Feb 18 00:28:16.501: INFO: (8) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 2.52827ms)
Feb 18 00:28:16.501: INFO: (8) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 2.546935ms)
Feb 18 00:28:16.501: INFO: (8) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 2.606677ms)
Feb 18 00:28:16.502: INFO: (8) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 4.150435ms)
Feb 18 00:28:16.502: INFO: (8) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 4.267375ms)
Feb 18 00:28:16.502: INFO: (8) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: ... (200; 4.246005ms)
Feb 18 00:28:16.502: INFO: (8) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 4.33311ms)
Feb 18 00:28:16.503: INFO: (8) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 5.380267ms)
Feb 18 00:28:16.504: INFO: (8) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 5.521107ms)
Feb 18 00:28:16.504: INFO: (8) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname1/proxy/: foo (200; 5.530825ms)
Feb 18 00:28:16.504: INFO: (8) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 5.590642ms)
Feb 18 00:28:16.504: INFO: (8) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 5.563365ms)
Feb 18 00:28:16.504: INFO: (8) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 5.562185ms)
Feb 18 00:28:16.506: INFO: (9) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 2.014479ms)
Feb 18 00:28:16.506: INFO: (9) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 2.364459ms)
Feb 18 00:28:16.507: INFO: (9) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 3.143178ms)
Feb 18 00:28:16.508: INFO: (9) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 3.753084ms)
Feb 18 00:28:16.508: INFO: (9) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 3.904034ms)
Feb 18 00:28:16.508: INFO: (9) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: test<... (200; 4.69378ms)
Feb 18 00:28:16.508: INFO: (9) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname1/proxy/: foo (200; 4.69313ms)
Feb 18 00:28:16.509: INFO: (9) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:1080/proxy/: ... (200; 4.82848ms)
Feb 18 00:28:16.509: INFO: (9) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 4.904484ms)
Feb 18 00:28:16.509: INFO: (9) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 4.880364ms)
Feb 18 00:28:16.509: INFO: (9) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 4.88934ms)
Feb 18 00:28:16.509: INFO: (9) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 4.931272ms)
Feb 18 00:28:16.509: INFO: (9) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 4.895672ms)
Feb 18 00:28:16.509: INFO: (9) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 4.864995ms)
Feb 18 00:28:16.509: INFO: (9) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 4.891659ms)
Feb 18 00:28:16.512: INFO: (10) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 3.015478ms)
Feb 18 00:28:16.512: INFO: (10) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 3.113436ms)
Feb 18 00:28:16.512: INFO: (10) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 3.116966ms)
Feb 18 00:28:16.513: INFO: (10) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 4.666664ms)
Feb 18 00:28:16.513: INFO: (10) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 4.68195ms)
Feb 18 00:28:16.513: INFO: (10) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 4.741342ms)
Feb 18 00:28:16.514: INFO: (10) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 4.701613ms)
Feb 18 00:28:16.514: INFO: (10) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 4.734866ms)
Feb 18 00:28:16.514: INFO: (10) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 4.723591ms)
Feb 18 00:28:16.514: INFO: (10) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: ... (200; 4.863224ms)
Feb 18 00:28:16.517: INFO: (11) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 3.066543ms)
Feb 18 00:28:16.517: INFO: (11) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 3.074906ms)
Feb 18 00:28:16.517: INFO: (11) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 3.176344ms)
Feb 18 00:28:16.520: INFO: (11) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:1080/proxy/: ... (200; 5.959922ms)
Feb 18 00:28:16.520: INFO: (11) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 5.930815ms)
Feb 18 00:28:16.520: INFO: (11) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 5.987557ms)
Feb 18 00:28:16.520: INFO: (11) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname1/proxy/: foo (200; 6.08884ms)
Feb 18 00:28:16.520: INFO: (11) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 6.097928ms)
Feb 18 00:28:16.520: INFO: (11) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 6.035679ms)
Feb 18 00:28:16.520: INFO: (11) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 6.087057ms)
Feb 18 00:28:16.520: INFO: (11) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: test (200; 6.141439ms)
Feb 18 00:28:16.520: INFO: (11) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 6.12472ms)
Feb 18 00:28:16.520: INFO: (11) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 6.185105ms)
Feb 18 00:28:16.524: INFO: (12) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: test<... (200; 3.900604ms)
Feb 18 00:28:16.524: INFO: (12) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 3.884229ms)
Feb 18 00:28:16.524: INFO: (12) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 3.974438ms)
Feb 18 00:28:16.524: INFO: (12) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 3.914572ms)
Feb 18 00:28:16.524: INFO: (12) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 3.9845ms)
Feb 18 00:28:16.524: INFO: (12) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 3.951568ms)
Feb 18 00:28:16.524: INFO: (12) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 3.987763ms)
Feb 18 00:28:16.524: INFO: (12) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 4.37538ms)
Feb 18 00:28:16.525: INFO: (12) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname1/proxy/: foo (200; 4.688504ms)
Feb 18 00:28:16.525: INFO: (12) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 4.591595ms)
Feb 18 00:28:16.525: INFO: (12) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 4.646895ms)
Feb 18 00:28:16.525: INFO: (12) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 4.713307ms)
Feb 18 00:28:16.525: INFO: (12) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 4.671757ms)
Feb 18 00:28:16.525: INFO: (12) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:1080/proxy/: ... (200; 4.740028ms)
Feb 18 00:28:16.529: INFO: (13) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 4.046117ms)
Feb 18 00:28:16.529: INFO: (13) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 3.972277ms)
Feb 18 00:28:16.530: INFO: (13) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 5.603452ms)
Feb 18 00:28:16.530: INFO: (13) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 5.350343ms)
Feb 18 00:28:16.531: INFO: (13) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: ... (200; 5.595294ms)
Feb 18 00:28:16.531: INFO: (13) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 5.869067ms)
Feb 18 00:28:16.531: INFO: (13) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 6.182336ms)
Feb 18 00:28:16.531: INFO: (13) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 6.218998ms)
Feb 18 00:28:16.531: INFO: (13) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 5.983299ms)
Feb 18 00:28:16.531: INFO: (13) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 5.975828ms)
Feb 18 00:28:16.531: INFO: (13) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 6.232313ms)
Feb 18 00:28:16.531: INFO: (13) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 6.130017ms)
Feb 18 00:28:16.531: INFO: (13) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 6.260429ms)
Feb 18 00:28:16.534: INFO: (14) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: test<... (200; 6.660651ms)
Feb 18 00:28:16.538: INFO: (14) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 6.680787ms)
Feb 18 00:28:16.538: INFO: (14) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:1080/proxy/: ... (200; 6.705267ms)
Feb 18 00:28:16.538: INFO: (14) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 6.766873ms)
Feb 18 00:28:16.538: INFO: (14) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 6.875402ms)
Feb 18 00:28:16.538: INFO: (14) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 6.93183ms)
Feb 18 00:28:16.538: INFO: (14) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 6.872928ms)
Feb 18 00:28:16.538: INFO: (14) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 6.968946ms)
Feb 18 00:28:16.539: INFO: (14) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 7.869279ms)
Feb 18 00:28:16.540: INFO: (14) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 8.638824ms)
Feb 18 00:28:16.540: INFO: (14) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname1/proxy/: foo (200; 8.756926ms)
Feb 18 00:28:16.540: INFO: (14) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 8.769826ms)
Feb 18 00:28:16.540: INFO: (14) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 8.771011ms)
Feb 18 00:28:16.540: INFO: (14) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 8.934742ms)
Feb 18 00:28:16.540: INFO: (14) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 9.041375ms)
Feb 18 00:28:16.543: INFO: (15) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:1080/proxy/: ... (200; 2.153126ms)
Feb 18 00:28:16.544: INFO: (15) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:462/proxy/: tls qux (200; 2.990259ms)
Feb 18 00:28:16.544: INFO: (15) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: test (200; 3.252038ms)
Feb 18 00:28:16.544: INFO: (15) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 3.278173ms)
Feb 18 00:28:16.544: INFO: (15) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname1/proxy/: foo (200; 3.566169ms)
Feb 18 00:28:16.544: INFO: (15) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 3.495383ms)
Feb 18 00:28:16.545: INFO: (15) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 4.516683ms)
Feb 18 00:28:16.545: INFO: (15) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 4.699731ms)
Feb 18 00:28:16.545: INFO: (15) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 4.94324ms)
Feb 18 00:28:16.546: INFO: (15) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 5.051724ms)
Feb 18 00:28:16.546: INFO: (15) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 5.041711ms)
Feb 18 00:28:16.546: INFO: (15) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 5.597515ms)
Feb 18 00:28:16.549: INFO: (16) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 2.607576ms)
Feb 18 00:28:16.549: INFO: (16) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 2.659384ms)
Feb 18 00:28:16.549: INFO: (16) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 2.805898ms)
Feb 18 00:28:16.549: INFO: (16) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 2.780592ms)
Feb 18 00:28:16.549: INFO: (16) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 2.79053ms)
Feb 18 00:28:16.549: INFO: (16) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 2.953081ms)
Feb 18 00:28:16.549: INFO: (16) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: ... (200; 3.18413ms)
Feb 18 00:28:16.550: INFO: (16) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname1/proxy/: foo (200; 4.025584ms)
Feb 18 00:28:16.550: INFO: (16) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 4.108511ms)
Feb 18 00:28:16.550: INFO: (16) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 4.103328ms)
Feb 18 00:28:16.550: INFO: (16) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname1/proxy/: foo (200; 4.118761ms)
Feb 18 00:28:16.550: INFO: (16) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 4.273821ms)
Feb 18 00:28:16.551: INFO: (16) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 4.262214ms)
Feb 18 00:28:16.553: INFO: (17) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 2.3841ms)
Feb 18 00:28:16.553: INFO: (17) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 2.573346ms)
Feb 18 00:28:16.553: INFO: (17) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 2.651878ms)
Feb 18 00:28:16.553: INFO: (17) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 2.633187ms)
Feb 18 00:28:16.554: INFO: (17) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 3.129382ms)
Feb 18 00:28:16.554: INFO: (17) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 3.053115ms)
Feb 18 00:28:16.554: INFO: (17) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:1080/proxy/: ... (200; 3.33799ms)
Feb 18 00:28:16.554: INFO: (17) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 3.291587ms)
Feb 18 00:28:16.554: INFO: (17) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: test<... (200; 3.24792ms)
Feb 18 00:28:16.558: INFO: (18) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 3.60778ms)
Feb 18 00:28:16.558: INFO: (18) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 3.728742ms)
Feb 18 00:28:16.558: INFO: (18) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: test (200; 4.09502ms)
Feb 18 00:28:16.559: INFO: (18) /api/v1/namespaces/proxy-8623/services/proxy-service-pl8hm:portname2/proxy/: bar (200; 4.167128ms)
Feb 18 00:28:16.559: INFO: (18) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname2/proxy/: tls qux (200; 4.154848ms)
Feb 18 00:28:16.559: INFO: (18) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:1080/proxy/: ... (200; 4.125859ms)
Feb 18 00:28:16.559: INFO: (18) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 4.179089ms)
Feb 18 00:28:16.563: INFO: (19) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:1080/proxy/: ... (200; 3.988147ms)
Feb 18 00:28:16.563: INFO: (19) /api/v1/namespaces/proxy-8623/services/http:proxy-service-pl8hm:portname2/proxy/: bar (200; 4.060758ms)
Feb 18 00:28:16.563: INFO: (19) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:460/proxy/: tls baz (200; 4.165661ms)
Feb 18 00:28:16.563: INFO: (19) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm/proxy/: test (200; 4.203383ms)
Feb 18 00:28:16.563: INFO: (19) /api/v1/namespaces/proxy-8623/services/https:proxy-service-pl8hm:tlsportname1/proxy/: tls baz (200; 4.616244ms)
Feb 18 00:28:16.563: INFO: (19) /api/v1/namespaces/proxy-8623/pods/http:proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 4.628904ms)
Feb 18 00:28:16.564: INFO: (19) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:160/proxy/: foo (200; 4.681416ms)
Feb 18 00:28:16.564: INFO: (19) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:162/proxy/: bar (200; 4.824905ms)
Feb 18 00:28:16.564: INFO: (19) /api/v1/namespaces/proxy-8623/pods/proxy-service-pl8hm-qbcxm:1080/proxy/: test<... (200; 4.841332ms)
Feb 18 00:28:16.564: INFO: (19) /api/v1/namespaces/proxy-8623/pods/https:proxy-service-pl8hm-qbcxm:443/proxy/: 
[AfterEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Destroying namespace "proxy-8623" for this suite.
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 18 00:28:33.520: INFO: Successfully updated pod "annotationupdate636c26c8-dc1a-4c86-bc31-4a2474d98386"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:28:37.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-436" for this suite.
Feb 18 00:29:01.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:29:01.764: INFO: namespace projected-436 deletion completed in 24.198320645s

• [SLOW TEST:32.893 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
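This test mounts the pod's own annotations through a projected downwardAPI source and then patches them; the kubelet refreshes the mounted file, which is what the "Successfully updated pod" line above is verified against. A sketch of the volume stanza, with an illustrative path:

  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations   # whole annotation map, refreshed on change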
SSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:29:01.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-4a76e626-26b0-43d8-b963-4a9cb49b5a3c
STEP: Creating a pod to test consume secrets
Feb 18 00:29:03.854: INFO: Waiting up to 5m0s for pod "pod-secrets-771c6bb8-9561-4bff-a1aa-8c75477de164" in namespace "secrets-8727" to be "success or failure"
Feb 18 00:29:04.115: INFO: Pod "pod-secrets-771c6bb8-9561-4bff-a1aa-8c75477de164": Phase="Pending", Reason="", readiness=false. Elapsed: 261.384914ms
Feb 18 00:29:06.118: INFO: Pod "pod-secrets-771c6bb8-9561-4bff-a1aa-8c75477de164": Phase="Pending", Reason="", readiness=false. Elapsed: 2.264317403s
Feb 18 00:29:08.125: INFO: Pod "pod-secrets-771c6bb8-9561-4bff-a1aa-8c75477de164": Phase="Pending", Reason="", readiness=false. Elapsed: 4.270930903s
Feb 18 00:29:10.233: INFO: Pod "pod-secrets-771c6bb8-9561-4bff-a1aa-8c75477de164": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.37908098s
STEP: Saw pod success
Feb 18 00:29:10.233: INFO: Pod "pod-secrets-771c6bb8-9561-4bff-a1aa-8c75477de164" satisfied condition "success or failure"
Feb 18 00:29:10.235: INFO: Trying to get logs from node iruya-worker pod pod-secrets-771c6bb8-9561-4bff-a1aa-8c75477de164 container secret-volume-test: 
STEP: delete the pod
Feb 18 00:29:10.284: INFO: Waiting for pod pod-secrets-771c6bb8-9561-4bff-a1aa-8c75477de164 to disappear
Feb 18 00:29:10.288: INFO: Pod pod-secrets-771c6bb8-9561-4bff-a1aa-8c75477de164 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:29:10.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8727" for this suite.
Feb 18 00:29:16.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:29:16.393: INFO: namespace secrets-8727 deletion completed in 6.10118878s
STEP: Destroying namespace "secret-namespace-3611" for this suite.
Feb 18 00:29:22.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:29:22.496: INFO: namespace secret-namespace-3611 deletion completed in 6.103259147s

• [SLOW TEST:20.732 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
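The point of this test is namespace scoping: two secrets share a name, but the pod's volume resolves that name inside its own namespace ("secrets-8727"), never the decoy in "secret-namespace-3611". Sketched with illustrative data; the real secret name carries a uuid suffix:

  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test            # same name in both namespaces
    namespace: secrets-8727      # the pod's namespace; this one gets mounted
  data:
    data-1: dmFsdWUtMQ==         # base64 "value-1"
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test
    namespace: secret-namespace-3611   # decoy; must not be picked up
  data:
    data-1: ZGVjb3k=             # base64 "decoy"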
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:29:22.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 00:29:22.576: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a62c8f87-9faf-498f-a7a1-1fb2ced8defe" in namespace "projected-8473" to be "success or failure"
Feb 18 00:29:22.595: INFO: Pod "downwardapi-volume-a62c8f87-9faf-498f-a7a1-1fb2ced8defe": Phase="Pending", Reason="", readiness=false. Elapsed: 18.787224ms
Feb 18 00:29:24.599: INFO: Pod "downwardapi-volume-a62c8f87-9faf-498f-a7a1-1fb2ced8defe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022948816s
Feb 18 00:29:26.603: INFO: Pod "downwardapi-volume-a62c8f87-9faf-498f-a7a1-1fb2ced8defe": Phase="Running", Reason="", readiness=true. Elapsed: 4.026866508s
Feb 18 00:29:28.608: INFO: Pod "downwardapi-volume-a62c8f87-9faf-498f-a7a1-1fb2ced8defe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031109786s
STEP: Saw pod success
Feb 18 00:29:28.608: INFO: Pod "downwardapi-volume-a62c8f87-9faf-498f-a7a1-1fb2ced8defe" satisfied condition "success or failure"
Feb 18 00:29:28.611: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a62c8f87-9faf-498f-a7a1-1fb2ced8defe container client-container: 
STEP: delete the pod
Feb 18 00:29:28.650: INFO: Waiting for pod downwardapi-volume-a62c8f87-9faf-498f-a7a1-1fb2ced8defe to disappear
Feb 18 00:29:28.665: INFO: Pod downwardapi-volume-a62c8f87-9faf-498f-a7a1-1fb2ced8defe no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:29:28.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8473" for this suite.
Feb 18 00:29:34.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:29:34.773: INFO: namespace projected-8473 deletion completed in 6.104335914s

• [SLOW TEST:12.276 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
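Same assertion as the plain Downward API volume case earlier, but routed through a projected volume source; a sketch of the only stanza that differs:

  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container   # name taken from the log line above
              resource: limits.memory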
SSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:29:34.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1424.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1424.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1424.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1424.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1424.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1424.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 00:29:42.900: INFO: DNS probes using dns-1424/dns-test-afbb6843-0509-40ae-afea-79ed41657739 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:29:43.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1424" for this suite.
Feb 18 00:29:49.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:29:49.225: INFO: namespace dns-1424 deletion completed in 6.144524129s

• [SLOW TEST:14.452 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
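The getent lookups in the probe scripts succeed without querying a DNS server because the kubelet writes the pod's own FQDN into /etc/hosts when hostname and subdomain are set and a matching headless service exists. The relevant pod fields, sketched:

  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-querier-1
  spec:
    hostname: dns-querier-1
    subdomain: dns-test-service   # a headless Service of this name backs the FQDN
    # kubelet then adds an /etc/hosts entry for
    # dns-querier-1.dns-test-service.dns-1424.svc.cluster.local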
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:29:49.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-27710f4c-5909-4ca4-8714-6743f69a551c
STEP: Creating a pod to test consume configMaps
Feb 18 00:29:49.363: INFO: Waiting up to 5m0s for pod "pod-configmaps-7fa67653-629a-4cbc-bef7-57add9d0bfc0" in namespace "configmap-8879" to be "success or failure"
Feb 18 00:29:49.449: INFO: Pod "pod-configmaps-7fa67653-629a-4cbc-bef7-57add9d0bfc0": Phase="Pending", Reason="", readiness=false. Elapsed: 85.827785ms
Feb 18 00:29:51.484: INFO: Pod "pod-configmaps-7fa67653-629a-4cbc-bef7-57add9d0bfc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.121563572s
Feb 18 00:29:53.900: INFO: Pod "pod-configmaps-7fa67653-629a-4cbc-bef7-57add9d0bfc0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.537148174s
Feb 18 00:29:55.904: INFO: Pod "pod-configmaps-7fa67653-629a-4cbc-bef7-57add9d0bfc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.540859902s
STEP: Saw pod success
Feb 18 00:29:55.904: INFO: Pod "pod-configmaps-7fa67653-629a-4cbc-bef7-57add9d0bfc0" satisfied condition "success or failure"
Feb 18 00:29:55.907: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7fa67653-629a-4cbc-bef7-57add9d0bfc0 container configmap-volume-test: 
STEP: delete the pod
Feb 18 00:29:56.253: INFO: Waiting for pod pod-configmaps-7fa67653-629a-4cbc-bef7-57add9d0bfc0 to disappear
Feb 18 00:29:56.478: INFO: Pod pod-configmaps-7fa67653-629a-4cbc-bef7-57add9d0bfc0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:29:56.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8879" for this suite.
Feb 18 00:30:02.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:30:02.682: INFO: namespace configmap-8879 deletion completed in 6.199426369s

• [SLOW TEST:13.456 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
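The non-root variant runs the same configMap-volume consumption under a non-root securityContext; the uid below is an assumption for illustration, not read from the log:

  spec:
    securityContext:
      runAsUser: 1000          # assumed non-root uid
      fsGroup: 1000            # assumed; lets the non-root user read the files
    containers:
    - name: configmap-volume-test
      volumeMounts:
      - name: configmap-volume
        mountPath: /etc/configmap-volume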
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:30:02.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:30:06.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9101" for this suite.
Feb 18 00:30:13.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:30:13.144: INFO: namespace emptydir-wrapper-9101 deletion completed in 6.12341394s

• [SLOW TEST:10.462 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
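[Annotation] The "should not conflict" test mounts a Secret-backed volume and a ConfigMap-backed volume in one pod and verifies that the kubelet's emptyDir-based wrapper volumes for the two do not collide; the cleanup steps logged above (secret, configmap, pod) mirror that setup. A hedged sketch, with all names, image, and paths invented for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: wrapper-volumes-example       # hypothetical
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapper-secret      # created by the test before the pod
  - name: configmap-volume
    configMap:
      name: wrapper-configmap
  containers:
  - name: test
    image: busybox                    # illustrative
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
    - name: configmap-volume
      mountPath: /etc/configmap-volume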
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:30:13.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb 18 00:30:13.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 18 00:30:15.788: INFO: stderr: ""
Feb 18 00:30:15.788: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37703\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37703/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:30:15.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3832" for this suite.
Feb 18 00:30:21.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:30:21.899: INFO: namespace kubectl-3832 deletion completed in 6.107413689s

• [SLOW TEST:8.755 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
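[Annotation] Stripped of the ANSI color escapes (\x1b[0;32m and friends), the cluster-info stdout captured above reads:

Kubernetes master is running at https://172.30.12.66:37703
KubeDNS is running at https://172.30.12.66:37703/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

The test passes as long as the "Kubernetes master" line is present in that output.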
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:30:21.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 18 00:30:26.029: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:30:26.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2609" for this suite.
Feb 18 00:30:32.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:30:32.158: INFO: namespace container-runtime-2609 deletion completed in 6.104245113s

• [SLOW TEST:10.259 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
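[Annotation] This blackbox test writes a message to the container's termination-message file and expects the kubelet to surface it (the log shows the expected value OK). A minimal sketch of the relevant container fields; the image and command are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                    # illustrative
    command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError

Because the pod succeeds and the file is non-empty, the file's contents win and container logs are never consulted.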
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:30:32.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 18 00:30:32.241: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 18 00:30:32.247: INFO: Waiting for terminating namespaces to be deleted...
Feb 18 00:30:32.250: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Feb 18 00:30:32.259: INFO: coredns-5d4dd4b4db-69khc from kube-system started at 2021-01-10 17:26:03 +0000 UTC (1 container status recorded)
Feb 18 00:30:32.259: INFO: 	Container coredns ready: true, restart count 1
Feb 18 00:30:32.259: INFO: kube-proxy-24ww6 from kube-system started at 2021-01-10 17:25:00 +0000 UTC (1 container status recorded)
Feb 18 00:30:32.259: INFO: 	Container kube-proxy ready: true, restart count 1
Feb 18 00:30:32.259: INFO: chaos-controller-manager-6c68f56f79-2j2xr from default started at 2021-01-11 03:53:47 +0000 UTC (1 container status recorded)
Feb 18 00:30:32.259: INFO: 	Container chaos-mesh ready: true, restart count 2
Feb 18 00:30:32.259: INFO: local-path-provisioner-7f465859dc-zj67c from local-path-storage started at 2021-01-10 17:26:02 +0000 UTC (1 container status recorded)
Feb 18 00:30:32.259: INFO: 	Container local-path-provisioner ready: true, restart count 7
Feb 18 00:30:32.259: INFO: kindnet-vgcd6 from kube-system started at 2021-01-10 17:25:04 +0000 UTC (1 container status recorded)
Feb 18 00:30:32.259: INFO: 	Container kindnet-cni ready: true, restart count 1
Feb 18 00:30:32.259: INFO: chaos-daemon-s74sn from default started at 2021-01-11 03:53:47 +0000 UTC (1 container status recorded)
Feb 18 00:30:32.259: INFO: 	Container chaos-daemon ready: true, restart count 1
Feb 18 00:30:32.259: INFO: coredns-5d4dd4b4db-b9gp2 from kube-system started at 2021-01-10 17:25:57 +0000 UTC (1 container status recorded)
Feb 18 00:30:32.259: INFO: 	Container coredns ready: true, restart count 1
Feb 18 00:30:32.259: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Feb 18 00:30:32.264: INFO: chaos-daemon-7gq5t from default started at 2021-01-11 03:53:47 +0000 UTC (1 container status recorded)
Feb 18 00:30:32.264: INFO: 	Container chaos-daemon ready: true, restart count 1
Feb 18 00:30:32.264: INFO: kindnet-gbtx5 from kube-system started at 2021-01-10 17:25:04 +0000 UTC (1 container status recorded)
Feb 18 00:30:32.264: INFO: 	Container kindnet-cni ready: true, restart count 2
Feb 18 00:30:32.265: INFO: kube-proxy-h6zb5 from kube-system started at 2021-01-10 17:25:00 +0000 UTC (1 container status recorded)
Feb 18 00:30:32.265: INFO: 	Container kube-proxy ready: true, restart count 1
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.1664b0235405d1b8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:30:33.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8921" for this suite.
Feb 18 00:30:39.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:30:39.488: INFO: namespace sched-pred-8921 deletion completed in 6.196334639s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.330 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
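[Annotation] The scheduling test submits a pod whose nodeSelector matches no node, then asserts on the FailedScheduling event seen above ("0/3 nodes are available: 3 node(s) didn't match node selector."). A sketch of such a pod; the selector key/value and image are assumptions chosen to match nothing:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod                # name matches the event in the log
spec:
  nodeSelector:
    label: nonempty                   # assumed; any label no node carries works
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1       # illustrative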
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:30:39.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 18 00:30:49.949: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 18 00:30:49.961: INFO: Pod pod-with-poststart-http-hook still exists
Feb 18 00:30:51.961: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 18 00:30:51.965: INFO: Pod pod-with-poststart-http-hook still exists
Feb 18 00:30:53.961: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 18 00:30:53.972: INFO: Pod pod-with-poststart-http-hook still exists
Feb 18 00:30:55.961: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 18 00:30:56.141: INFO: Pod pod-with-poststart-http-hook still exists
Feb 18 00:30:57.961: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 18 00:30:57.965: INFO: Pod pod-with-poststart-http-hook still exists
Feb 18 00:30:59.961: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 18 00:30:59.965: INFO: Pod pod-with-poststart-http-hook still exists
Feb 18 00:31:01.961: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 18 00:31:01.965: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:31:01.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9476" for this suite.
Feb 18 00:31:24.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:31:24.126: INFO: namespace container-lifecycle-hook-9476 deletion completed in 22.156067578s

• [SLOW TEST:44.636 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
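[Annotation] The lifecycle-hook test first starts a helper pod to receive HTTP requests, then creates pod-with-poststart-http-hook whose postStart hook performs an HTTP GET against that helper; the long disappear-polling above is the graceful deletion of the hooked pod. A hedged sketch of the hook portion (image, path, port, and host are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: busybox                    # illustrative
    command: ["sleep", "3600"]
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # assumed endpoint on the handler pod
          port: 8080
          host: 10.244.2.100          # assumed: the handler pod's IP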
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:31:24.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 00:31:24.222: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbe28af4-abdd-4c90-aab6-f117d9cd9ff7" in namespace "projected-7119" to be "success or failure"
Feb 18 00:31:24.225: INFO: Pod "downwardapi-volume-bbe28af4-abdd-4c90-aab6-f117d9cd9ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.240489ms
Feb 18 00:31:26.234: INFO: Pod "downwardapi-volume-bbe28af4-abdd-4c90-aab6-f117d9cd9ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011525775s
Feb 18 00:31:28.282: INFO: Pod "downwardapi-volume-bbe28af4-abdd-4c90-aab6-f117d9cd9ff7": Phase="Running", Reason="", readiness=true. Elapsed: 4.060016147s
Feb 18 00:31:30.342: INFO: Pod "downwardapi-volume-bbe28af4-abdd-4c90-aab6-f117d9cd9ff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.119923904s
STEP: Saw pod success
Feb 18 00:31:30.342: INFO: Pod "downwardapi-volume-bbe28af4-abdd-4c90-aab6-f117d9cd9ff7" satisfied condition "success or failure"
Feb 18 00:31:30.345: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bbe28af4-abdd-4c90-aab6-f117d9cd9ff7 container client-container: 
STEP: delete the pod
Feb 18 00:31:30.404: INFO: Waiting for pod downwardapi-volume-bbe28af4-abdd-4c90-aab6-f117d9cd9ff7 to disappear
Feb 18 00:31:30.441: INFO: Pod downwardapi-volume-bbe28af4-abdd-4c90-aab6-f117d9cd9ff7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:31:30.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7119" for this suite.
Feb 18 00:31:36.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:31:36.600: INFO: namespace projected-7119 deletion completed in 6.125991579s

• [SLOW TEST:12.474 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
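[Annotation] The DefaultMode test mounts pod metadata through a projected downwardAPI volume and checks the resulting file permission bits. A minimal sketch (item path, field, mode value, and image are assumptions; the test's actual mode is not printed in this log):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # hypothetical
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400               # applied to every projected file
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
  containers:
  - name: client-container            # container name as logged above
    image: busybox                    # illustrative
    command: ["ls", "-l", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo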
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:31:36.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 18 00:31:36.683: INFO: Waiting up to 5m0s for pod "downward-api-5db455a4-fc85-4c60-b02b-69d14a887ff8" in namespace "downward-api-5049" to be "success or failure"
Feb 18 00:31:36.686: INFO: Pod "downward-api-5db455a4-fc85-4c60-b02b-69d14a887ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.093635ms
Feb 18 00:31:38.690: INFO: Pod "downward-api-5db455a4-fc85-4c60-b02b-69d14a887ff8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007329995s
Feb 18 00:31:40.749: INFO: Pod "downward-api-5db455a4-fc85-4c60-b02b-69d14a887ff8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066345958s
STEP: Saw pod success
Feb 18 00:31:40.749: INFO: Pod "downward-api-5db455a4-fc85-4c60-b02b-69d14a887ff8" satisfied condition "success or failure"
Feb 18 00:31:40.753: INFO: Trying to get logs from node iruya-worker pod downward-api-5db455a4-fc85-4c60-b02b-69d14a887ff8 container dapi-container: 
STEP: delete the pod
Feb 18 00:31:40.778: INFO: Waiting for pod downward-api-5db455a4-fc85-4c60-b02b-69d14a887ff8 to disappear
Feb 18 00:31:40.794: INFO: Pod downward-api-5db455a4-fc85-4c60-b02b-69d14a887ff8 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:31:40.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5049" for this suite.
Feb 18 00:31:46.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:31:46.908: INFO: namespace downward-api-5049 deletion completed in 6.110444941s

• [SLOW TEST:10.307 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:31:46.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 18 00:31:53.559: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-831 pod-service-account-de123099-8bc7-4476-9321-829e0150b285 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 18 00:31:53.760: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-831 pod-service-account-de123099-8bc7-4476-9321-829e0150b285 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 18 00:31:53.981: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-831 pod-service-account-de123099-8bc7-4476-9321-829e0150b285 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:31:54.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-831" for this suite.
Feb 18 00:32:00.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:32:00.344: INFO: namespace svcaccounts-831 deletion completed in 6.145189263s

• [SLOW TEST:13.436 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
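[Annotation] The three kubectl exec invocations above read the standard files that the auto-mounted service-account volume places at /var/run/secrets/kubernetes.io/serviceaccount: token (the bearer token), ca.crt (the cluster CA bundle), and namespace (the pod's own namespace). No special pod configuration is needed beyond leaving automounting enabled; a hedged sketch (names and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example   # hypothetical
spec:
  serviceAccountName: default
  automountServiceAccountToken: true  # the default; shown for emphasis
  containers:
  - name: test                        # matches the -c=test flag in the exec commands above
    image: busybox                    # illustrative
    command: ["sleep", "3600"]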
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:32:00.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb 18 00:32:06.457: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 18 00:32:16.560: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:32:16.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1067" for this suite.
Feb 18 00:32:22.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:32:22.674: INFO: namespace pods-1067 deletion completed in 6.106703302s

• [SLOW TEST:22.329 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
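[Annotation] The grace-period test deletes the pod through the API with a deletion grace period and, via the kubectl proxy started above, polls until the kubelet has observed the termination notice. The CLI equivalent of such a graceful delete would look like this (pod name placeholder and period are illustrative):

kubectl --kubeconfig=/root/.kube/config delete pod <pod-name> --namespace=pods-1067 --grace-period=30

With a positive grace period the pod object lingers with a deletionTimestamp until the kubelet confirms shutdown, which is why the test waits for the pod to "disappear" rather than trusting the delete call alone.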
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:32:22.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 00:32:22.800: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aa8b9176-b8ce-4822-a26d-ba3029769a8c" in namespace "downward-api-375" to be "success or failure"
Feb 18 00:32:22.820: INFO: Pod "downwardapi-volume-aa8b9176-b8ce-4822-a26d-ba3029769a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 19.140301ms
Feb 18 00:32:24.824: INFO: Pod "downwardapi-volume-aa8b9176-b8ce-4822-a26d-ba3029769a8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023344308s
Feb 18 00:32:26.845: INFO: Pod "downwardapi-volume-aa8b9176-b8ce-4822-a26d-ba3029769a8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044578396s
STEP: Saw pod success
Feb 18 00:32:26.845: INFO: Pod "downwardapi-volume-aa8b9176-b8ce-4822-a26d-ba3029769a8c" satisfied condition "success or failure"
Feb 18 00:32:26.848: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-aa8b9176-b8ce-4822-a26d-ba3029769a8c container client-container: 
STEP: delete the pod
Feb 18 00:32:26.882: INFO: Waiting for pod downwardapi-volume-aa8b9176-b8ce-4822-a26d-ba3029769a8c to disappear
Feb 18 00:32:26.915: INFO: Pod downwardapi-volume-aa8b9176-b8ce-4822-a26d-ba3029769a8c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:32:26.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-375" for this suite.
Feb 18 00:32:33.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:32:33.182: INFO: namespace downward-api-375 deletion completed in 6.26283875s

• [SLOW TEST:10.508 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
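[Annotation] This variant surfaces the container's memory limit through a downwardAPI volume resourceFieldRef. A sketch using the container name from the log; the limit value, divisor, paths, and image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # hypothetical
spec:
  restartPolicy: Never
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi                # report the limit in MiB
  containers:
  - name: client-container
    image: busybox                    # illustrative
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi                  # assumed; some limit must be set for the fieldRef
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo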
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:32:33.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 18 00:32:38.125: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:32:38.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-814" for this suite.
Feb 18 00:32:44.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:32:44.648: INFO: namespace container-runtime-814 deletion completed in 6.249613571s

• [SLOW TEST:11.466 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:32:44.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 18 00:32:48.823: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-753771f9-afa9-4632-b193-cb3311d5ecb3,GenerateName:,Namespace:events-2981,SelfLink:/api/v1/namespaces/events-2981/pods/send-events-753771f9-afa9-4632-b193-cb3311d5ecb3,UID:fa1dc144-37b5-4c10-9713-fba447e1e82c,ResourceVersion:6956114,Generation:0,CreationTimestamp:2021-02-18 00:32:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 701361764,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-v458w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v458w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-v458w true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002eaeca0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002eaecc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:32:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:32:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:32:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:32:44 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.122,StartTime:2021-02-18 00:32:44 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2021-02-18 00:32:47 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://1aa55e70643d33c6ffa19aa75b4d0decb54b53c47fc4e4568813b83ffdac4faa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb 18 00:32:50.827: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 18 00:32:52.831: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:32:52.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2981" for this suite.
Feb 18 00:33:34.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:33:35.041: INFO: namespace events-2981 deletion completed in 42.13443074s

• [SLOW TEST:50.392 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
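[Annotation] The events test asserts that both the scheduler and the kubelet emitted events for the pod dumped above. The same check can be made by hand with a field selector on the pod's name (a hedged example using standard kubectl flags):

kubectl --kubeconfig=/root/.kube/config get events --namespace=events-2981 \
  --field-selector involvedObject.name=send-events-753771f9-afa9-4632-b193-cb3311d5ecb3

Scheduler events come from source default-scheduler (e.g. Scheduled), while kubelet events carry the node name (e.g. Pulled, Created, Started), which is how the test tells the two apart.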
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:33:35.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 18 00:33:35.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-3501'
Feb 18 00:33:35.231: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 18 00:33:35.231: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb 18 00:33:39.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3501'
Feb 18 00:33:39.376: INFO: stderr: ""
Feb 18 00:33:39.376: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:33:39.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3501" for this suite.
Feb 18 00:34:01.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:34:01.484: INFO: namespace kubectl-3501 deletion completed in 22.104303026s

• [SLOW TEST:26.443 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
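[Annotation] As the deprecation warning in stderr says, the generator-based form used by this test was on its way out; the non-deprecated equivalent it points to would be:

kubectl --kubeconfig=/root/.kube/config create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3501

Note also the API-group asymmetry visible in the output: the deployment is created as deployment.apps but deleted as deployment.extensions, since both group versions still served Deployments in v1.15.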
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:34:01.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 18 00:34:01.608: INFO: Waiting up to 5m0s for pod "downward-api-a7301872-ac13-4280-bbf8-6847e476ba01" in namespace "downward-api-3626" to be "success or failure"
Feb 18 00:34:01.618: INFO: Pod "downward-api-a7301872-ac13-4280-bbf8-6847e476ba01": Phase="Pending", Reason="", readiness=false. Elapsed: 9.833632ms
Feb 18 00:34:03.622: INFO: Pod "downward-api-a7301872-ac13-4280-bbf8-6847e476ba01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01387585s
Feb 18 00:34:05.627: INFO: Pod "downward-api-a7301872-ac13-4280-bbf8-6847e476ba01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018576008s
STEP: Saw pod success
Feb 18 00:34:05.627: INFO: Pod "downward-api-a7301872-ac13-4280-bbf8-6847e476ba01" satisfied condition "success or failure"
Feb 18 00:34:05.630: INFO: Trying to get logs from node iruya-worker pod downward-api-a7301872-ac13-4280-bbf8-6847e476ba01 container dapi-container: 
STEP: delete the pod
Feb 18 00:34:05.650: INFO: Waiting for pod downward-api-a7301872-ac13-4280-bbf8-6847e476ba01 to disappear
Feb 18 00:34:05.654: INFO: Pod downward-api-a7301872-ac13-4280-bbf8-6847e476ba01 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:34:05.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3626" for this suite.
Feb 18 00:34:11.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:34:11.788: INFO: namespace downward-api-3626 deletion completed in 6.109398122s

• [SLOW TEST:10.304 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:34:11.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 18 00:34:11.862: INFO: Waiting up to 5m0s for pod "pod-27ae7917-94fa-4a25-9b18-4404110a2035" in namespace "emptydir-4611" to be "success or failure"
Feb 18 00:34:11.889: INFO: Pod "pod-27ae7917-94fa-4a25-9b18-4404110a2035": Phase="Pending", Reason="", readiness=false. Elapsed: 26.912387ms
Feb 18 00:34:13.893: INFO: Pod "pod-27ae7917-94fa-4a25-9b18-4404110a2035": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031312841s
Feb 18 00:34:15.898: INFO: Pod "pod-27ae7917-94fa-4a25-9b18-4404110a2035": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035746521s
STEP: Saw pod success
Feb 18 00:34:15.898: INFO: Pod "pod-27ae7917-94fa-4a25-9b18-4404110a2035" satisfied condition "success or failure"
Feb 18 00:34:15.901: INFO: Trying to get logs from node iruya-worker pod pod-27ae7917-94fa-4a25-9b18-4404110a2035 container test-container: 
STEP: delete the pod
Feb 18 00:34:15.944: INFO: Waiting for pod pod-27ae7917-94fa-4a25-9b18-4404110a2035 to disappear
Feb 18 00:34:15.953: INFO: Pod pod-27ae7917-94fa-4a25-9b18-4404110a2035 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:34:15.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4611" for this suite.
Feb 18 00:34:21.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:34:22.102: INFO: namespace emptydir-4611 deletion completed in 6.145711822s

• [SLOW TEST:10.313 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
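[Annotation] This test and the "volume on tmpfs" sibling later in the log differ only in the emptyDir medium: the default medium is backed by the node's storage, while medium: Memory is tmpfs-backed, and the volume directory's mode bits must be correct either way. The two specs differ by a single field:

volumes:
- name: default-medium
  emptyDir: {}              # node's default storage
- name: tmpfs-medium
  emptyDir:
    medium: Memory          # tmpfs; usage counts against container memory

(Volume names here are illustrative; the expected mode bits are verified inside the test image and are not printed in this log.)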
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:34:22.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 18 00:34:22.178: INFO: Waiting up to 5m0s for pod "pod-f5a2911c-fc6b-461a-b5d9-ca1c9d2bacb6" in namespace "emptydir-1092" to be "success or failure"
Feb 18 00:34:22.229: INFO: Pod "pod-f5a2911c-fc6b-461a-b5d9-ca1c9d2bacb6": Phase="Pending", Reason="", readiness=false. Elapsed: 51.192731ms
Feb 18 00:34:24.233: INFO: Pod "pod-f5a2911c-fc6b-461a-b5d9-ca1c9d2bacb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055161728s
Feb 18 00:34:26.237: INFO: Pod "pod-f5a2911c-fc6b-461a-b5d9-ca1c9d2bacb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058970141s
STEP: Saw pod success
Feb 18 00:34:26.237: INFO: Pod "pod-f5a2911c-fc6b-461a-b5d9-ca1c9d2bacb6" satisfied condition "success or failure"
Feb 18 00:34:26.240: INFO: Trying to get logs from node iruya-worker2 pod pod-f5a2911c-fc6b-461a-b5d9-ca1c9d2bacb6 container test-container: 
STEP: delete the pod
Feb 18 00:34:26.291: INFO: Waiting for pod pod-f5a2911c-fc6b-461a-b5d9-ca1c9d2bacb6 to disappear
Feb 18 00:34:26.301: INFO: Pod pod-f5a2911c-fc6b-461a-b5d9-ca1c9d2bacb6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:34:26.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1092" for this suite.
Feb 18 00:34:32.316: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:34:32.399: INFO: namespace emptydir-1092 deletion completed in 6.095051583s

• [SLOW TEST:10.297 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:34:32.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 18 00:34:32.488: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:34:32.493: INFO: Number of nodes with available pods: 0
Feb 18 00:34:32.493: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:34:33.498: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:34:33.502: INFO: Number of nodes with available pods: 0
Feb 18 00:34:33.502: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:34:34.507: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:34:34.510: INFO: Number of nodes with available pods: 0
Feb 18 00:34:34.510: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:34:35.620: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:34:35.624: INFO: Number of nodes with available pods: 0
Feb 18 00:34:35.624: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:34:36.498: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:34:36.501: INFO: Number of nodes with available pods: 0
Feb 18 00:34:36.501: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:34:37.497: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:34:37.500: INFO: Number of nodes with available pods: 1
Feb 18 00:34:37.500: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:34:38.497: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:34:38.500: INFO: Number of nodes with available pods: 2
Feb 18 00:34:38.500: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 18 00:34:38.535: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:34:38.554: INFO: Number of nodes with available pods: 2
Feb 18 00:34:38.554: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9890, will wait for the garbage collector to delete the pods
Feb 18 00:34:39.649: INFO: Deleting DaemonSet.extensions daemon-set took: 6.355805ms
Feb 18 00:34:39.950: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.335227ms
Feb 18 00:34:50.853: INFO: Number of nodes with available pods: 0
Feb 18 00:34:50.853: INFO: Number of running nodes: 0, number of available pods: 0
Feb 18 00:34:50.854: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9890/daemonsets","resourceVersion":"6956528"},"items":null}

Feb 18 00:34:50.857: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9890/pods","resourceVersion":"6956528"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:34:50.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9890" for this suite.
Feb 18 00:34:56.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:34:56.987: INFO: namespace daemonsets-9890 deletion completed in 6.117136363s

• [SLOW TEST:24.587 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
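Editor's note: the repeated "can't tolerate node iruya-control-plane" lines above are expected — the DaemonSet under test carries no toleration for the control plane's node-role.kubernetes.io/master:NoSchedule taint, so the framework skips that node when counting available pods. A minimal sketch of a comparable DaemonSet follows; the name matches the log, but the labels, image, and container are illustrative, not the test's actual spec:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set        # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: nginx         # placeholder image
        ports:
        - containerPort: 80

To land a pod on the tainted control-plane node as well, a toleration for key node-role.kubernetes.io/master with effect NoSchedule would go under spec.template.spec.tolerations; the same pattern applies to the "should run and stop simple daemon" case later in this log.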
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:34:56.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 18 00:34:57.058: INFO: Waiting up to 5m0s for pod "pod-8388b8a8-d78d-4538-b1e0-edf5c8e1341d" in namespace "emptydir-4102" to be "success or failure"
Feb 18 00:34:57.062: INFO: Pod "pod-8388b8a8-d78d-4538-b1e0-edf5c8e1341d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.917607ms
Feb 18 00:34:59.066: INFO: Pod "pod-8388b8a8-d78d-4538-b1e0-edf5c8e1341d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007890958s
Feb 18 00:35:01.070: INFO: Pod "pod-8388b8a8-d78d-4538-b1e0-edf5c8e1341d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011795315s
Feb 18 00:35:03.074: INFO: Pod "pod-8388b8a8-d78d-4538-b1e0-edf5c8e1341d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01581677s
STEP: Saw pod success
Feb 18 00:35:03.074: INFO: Pod "pod-8388b8a8-d78d-4538-b1e0-edf5c8e1341d" satisfied condition "success or failure"
Feb 18 00:35:03.077: INFO: Trying to get logs from node iruya-worker pod pod-8388b8a8-d78d-4538-b1e0-edf5c8e1341d container test-container: 
STEP: delete the pod
Feb 18 00:35:03.100: INFO: Waiting for pod pod-8388b8a8-d78d-4538-b1e0-edf5c8e1341d to disappear
Feb 18 00:35:03.134: INFO: Pod pod-8388b8a8-d78d-4538-b1e0-edf5c8e1341d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:35:03.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4102" for this suite.
Feb 18 00:35:09.162: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:35:09.305: INFO: namespace emptydir-4102 deletion completed in 6.168096s

• [SLOW TEST:12.318 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
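Editor's note: the "volume on tmpfs" case above corresponds to an emptyDir volume with medium: Memory, which the kubelet backs with a tmpfs mount; the test then checks the mount's default mode. A minimal sketch under that assumption (pod name, image, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-mode-check
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                                     # placeholder image
    command: ["sh", "-c", "stat -c %a /test-volume"]   # print the mount's mode bits
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                                   # tmpfs-backed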
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:35:09.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 18 00:35:09.341: INFO: Waiting up to 5m0s for pod "pod-2f8ddc54-8122-4fe1-89f7-bf4163db0c6f" in namespace "emptydir-6283" to be "success or failure"
Feb 18 00:35:09.356: INFO: Pod "pod-2f8ddc54-8122-4fe1-89f7-bf4163db0c6f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.859555ms
Feb 18 00:35:11.361: INFO: Pod "pod-2f8ddc54-8122-4fe1-89f7-bf4163db0c6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019944722s
Feb 18 00:35:13.364: INFO: Pod "pod-2f8ddc54-8122-4fe1-89f7-bf4163db0c6f": Phase="Running", Reason="", readiness=true. Elapsed: 4.023342969s
Feb 18 00:35:15.368: INFO: Pod "pod-2f8ddc54-8122-4fe1-89f7-bf4163db0c6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02770452s
STEP: Saw pod success
Feb 18 00:35:15.369: INFO: Pod "pod-2f8ddc54-8122-4fe1-89f7-bf4163db0c6f" satisfied condition "success or failure"
Feb 18 00:35:15.371: INFO: Trying to get logs from node iruya-worker pod pod-2f8ddc54-8122-4fe1-89f7-bf4163db0c6f container test-container: 
STEP: delete the pod
Feb 18 00:35:15.416: INFO: Waiting for pod pod-2f8ddc54-8122-4fe1-89f7-bf4163db0c6f to disappear
Feb 18 00:35:15.434: INFO: Pod pod-2f8ddc54-8122-4fe1-89f7-bf4163db0c6f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:35:15.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6283" for this suite.
Feb 18 00:35:21.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:35:21.596: INFO: namespace emptydir-6283 deletion completed in 6.157506375s

• [SLOW TEST:12.290 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
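Editor's note: the (non-root,0644,default) case above writes a file with mode 0644 into an emptyDir on the default (node-disk) medium while running as a non-root user, then reads the mode back; the (non-root,0666,default) and (root,0666,default) cases later in this log differ only in the mode and the user. A rough sketch of the pattern (user ID, image, and paths are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # non-root; the root variant omits this
  containers:
  - name: test-container
    image: busybox                 # placeholder image
    command: ["sh", "-c", "echo data > /test/f && chmod 0644 /test/f && stat -c %a /test/f"]
    volumeMounts:
    - name: test
      mountPath: /test
  volumes:
  - name: test
    emptyDir: {}                   # default medium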
SS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:35:21.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 18 00:35:28.940: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:35:29.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5801" for this suite.
Feb 18 00:35:54.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:35:54.080: INFO: namespace replicaset-5801 deletion completed in 24.102412687s

• [SLOW TEST:32.484 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
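Editor's note: adoption and release above are driven entirely by label selection — a pre-existing, ownerless pod whose labels match the ReplicaSet's selector is adopted on creation, and changing the matched label afterwards makes the controller release it and spin up a replacement. A sketch of such a matching pair, with an illustrative image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: app
    image: nginx                   # placeholder image
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: nginx               # placeholder image

Relabeling the pod, e.g. kubectl label pod pod-adoption-release name=released --overwrite, takes it out of the selector, so the ReplicaSet releases it and creates a new pod to restore the replica count.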
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:35:54.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 00:35:54.162: INFO: Creating ReplicaSet my-hostname-basic-dae7f755-c8e5-4b70-8ed6-995985968062
Feb 18 00:35:54.185: INFO: Pod name my-hostname-basic-dae7f755-c8e5-4b70-8ed6-995985968062: Found 0 pods out of 1
Feb 18 00:35:59.189: INFO: Pod name my-hostname-basic-dae7f755-c8e5-4b70-8ed6-995985968062: Found 1 pods out of 1
Feb 18 00:35:59.189: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-dae7f755-c8e5-4b70-8ed6-995985968062" is running
Feb 18 00:35:59.192: INFO: Pod "my-hostname-basic-dae7f755-c8e5-4b70-8ed6-995985968062-mdg7b" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-18 00:35:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-18 00:35:57 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-18 00:35:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2021-02-18 00:35:54 +0000 UTC Reason: Message:}])
Feb 18 00:35:59.192: INFO: Trying to dial the pod
Feb 18 00:36:04.204: INFO: Controller my-hostname-basic-dae7f755-c8e5-4b70-8ed6-995985968062: Got expected result from replica 1 [my-hostname-basic-dae7f755-c8e5-4b70-8ed6-995985968062-mdg7b]: "my-hostname-basic-dae7f755-c8e5-4b70-8ed6-995985968062-mdg7b", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:36:04.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9951" for this suite.
Feb 18 00:36:10.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:36:10.303: INFO: namespace replicaset-9951 deletion completed in 6.095218077s

• [SLOW TEST:16.223 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:36:10.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 18 00:36:10.390: INFO: Waiting up to 5m0s for pod "pod-da1cd40a-4c34-4356-8626-e1f9fb6ee41f" in namespace "emptydir-7523" to be "success or failure"
Feb 18 00:36:10.394: INFO: Pod "pod-da1cd40a-4c34-4356-8626-e1f9fb6ee41f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022098ms
Feb 18 00:36:12.398: INFO: Pod "pod-da1cd40a-4c34-4356-8626-e1f9fb6ee41f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00820924s
Feb 18 00:36:14.404: INFO: Pod "pod-da1cd40a-4c34-4356-8626-e1f9fb6ee41f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013672834s
STEP: Saw pod success
Feb 18 00:36:14.404: INFO: Pod "pod-da1cd40a-4c34-4356-8626-e1f9fb6ee41f" satisfied condition "success or failure"
Feb 18 00:36:14.406: INFO: Trying to get logs from node iruya-worker pod pod-da1cd40a-4c34-4356-8626-e1f9fb6ee41f container test-container: 
STEP: delete the pod
Feb 18 00:36:14.429: INFO: Waiting for pod pod-da1cd40a-4c34-4356-8626-e1f9fb6ee41f to disappear
Feb 18 00:36:14.434: INFO: Pod pod-da1cd40a-4c34-4356-8626-e1f9fb6ee41f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:36:14.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7523" for this suite.
Feb 18 00:36:20.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:36:20.532: INFO: namespace emptydir-7523 deletion completed in 6.091517277s

• [SLOW TEST:10.228 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:36:20.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 18 00:36:20.706: INFO: Waiting up to 5m0s for pod "pod-e491ead2-e5bd-4efe-8545-145e8d52dae5" in namespace "emptydir-1171" to be "success or failure"
Feb 18 00:36:20.741: INFO: Pod "pod-e491ead2-e5bd-4efe-8545-145e8d52dae5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.436942ms
Feb 18 00:36:22.745: INFO: Pod "pod-e491ead2-e5bd-4efe-8545-145e8d52dae5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038321285s
Feb 18 00:36:24.749: INFO: Pod "pod-e491ead2-e5bd-4efe-8545-145e8d52dae5": Phase="Running", Reason="", readiness=true. Elapsed: 4.042593873s
Feb 18 00:36:26.753: INFO: Pod "pod-e491ead2-e5bd-4efe-8545-145e8d52dae5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047024267s
STEP: Saw pod success
Feb 18 00:36:26.753: INFO: Pod "pod-e491ead2-e5bd-4efe-8545-145e8d52dae5" satisfied condition "success or failure"
Feb 18 00:36:26.757: INFO: Trying to get logs from node iruya-worker pod pod-e491ead2-e5bd-4efe-8545-145e8d52dae5 container test-container: 
STEP: delete the pod
Feb 18 00:36:26.784: INFO: Waiting for pod pod-e491ead2-e5bd-4efe-8545-145e8d52dae5 to disappear
Feb 18 00:36:26.805: INFO: Pod pod-e491ead2-e5bd-4efe-8545-145e8d52dae5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:36:26.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1171" for this suite.
Feb 18 00:36:32.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:36:32.950: INFO: namespace emptydir-1171 deletion completed in 6.140967686s

• [SLOW TEST:12.418 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:36:32.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-79c0e352-e7b5-442c-ae44-f20d50bff17e
STEP: Creating a pod to test consume configMaps
Feb 18 00:36:33.723: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a0ea7dc1-6777-4e51-9681-aa1cf8b15b84" in namespace "projected-2557" to be "success or failure"
Feb 18 00:36:33.755: INFO: Pod "pod-projected-configmaps-a0ea7dc1-6777-4e51-9681-aa1cf8b15b84": Phase="Pending", Reason="", readiness=false. Elapsed: 31.449223ms
Feb 18 00:36:35.759: INFO: Pod "pod-projected-configmaps-a0ea7dc1-6777-4e51-9681-aa1cf8b15b84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035733503s
Feb 18 00:36:37.764: INFO: Pod "pod-projected-configmaps-a0ea7dc1-6777-4e51-9681-aa1cf8b15b84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040425217s
Feb 18 00:36:39.769: INFO: Pod "pod-projected-configmaps-a0ea7dc1-6777-4e51-9681-aa1cf8b15b84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045431068s
STEP: Saw pod success
Feb 18 00:36:39.769: INFO: Pod "pod-projected-configmaps-a0ea7dc1-6777-4e51-9681-aa1cf8b15b84" satisfied condition "success or failure"
Feb 18 00:36:39.772: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-a0ea7dc1-6777-4e51-9681-aa1cf8b15b84 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 18 00:36:39.785: INFO: Waiting for pod pod-projected-configmaps-a0ea7dc1-6777-4e51-9681-aa1cf8b15b84 to disappear
Feb 18 00:36:39.802: INFO: Pod pod-projected-configmaps-a0ea7dc1-6777-4e51-9681-aa1cf8b15b84 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:36:39.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2557" for this suite.
Feb 18 00:36:45.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:36:45.916: INFO: namespace projected-2557 deletion completed in 6.111062079s

• [SLOW TEST:12.966 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
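Editor's note: "mappings and Item mode" above refers to the items list of a projected configMap source — each entry remaps a key to a chosen path and may set a per-file mode. A sketch under assumed names (my-config and data-1 are placeholders, not the generated names in the log):

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                 # placeholder image
    command: ["sh", "-c", "cat /etc/projected/renamed; stat -c %a /etc/projected/renamed"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-config          # assumed to exist
          items:
          - key: data-1            # illustrative key
            path: renamed          # mapped filename
            mode: 0400             # per-item mode (YAML octal)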
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:36:45.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-94288861-ba71-42ce-9755-f3a2af9eb936
STEP: Creating a pod to test consume secrets
Feb 18 00:36:46.041: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a298c0c9-7844-4b61-af87-d6944ee15a6b" in namespace "projected-7544" to be "success or failure"
Feb 18 00:36:46.054: INFO: Pod "pod-projected-secrets-a298c0c9-7844-4b61-af87-d6944ee15a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.318118ms
Feb 18 00:36:48.058: INFO: Pod "pod-projected-secrets-a298c0c9-7844-4b61-af87-d6944ee15a6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01738725s
Feb 18 00:36:50.062: INFO: Pod "pod-projected-secrets-a298c0c9-7844-4b61-af87-d6944ee15a6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021723849s
STEP: Saw pod success
Feb 18 00:36:50.063: INFO: Pod "pod-projected-secrets-a298c0c9-7844-4b61-af87-d6944ee15a6b" satisfied condition "success or failure"
Feb 18 00:36:50.066: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-a298c0c9-7844-4b61-af87-d6944ee15a6b container projected-secret-volume-test: 
STEP: delete the pod
Feb 18 00:36:50.133: INFO: Waiting for pod pod-projected-secrets-a298c0c9-7844-4b61-af87-d6944ee15a6b to disappear
Feb 18 00:36:50.138: INFO: Pod pod-projected-secrets-a298c0c9-7844-4b61-af87-d6944ee15a6b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:36:50.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7544" for this suite.
Feb 18 00:36:56.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:36:56.297: INFO: namespace projected-7544 deletion completed in 6.15517367s

• [SLOW TEST:10.380 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
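Editor's note: defaultMode above applies one mode to every file projected from the secret unless an item overrides it. A minimal sketch (secret name, image, and mode are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-mode
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                 # placeholder image
    command: ["sh", "-c", "ls -l /etc/secret"]
    volumeMounts:
    - name: sec
      mountPath: /etc/secret
  volumes:
  - name: sec
    projected:
      defaultMode: 0400            # applied to each projected file (YAML octal)
      sources:
      - secret:
          name: my-secret          # assumed to exist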
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:36:56.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4133
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 18 00:36:56.395: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb 18 00:37:24.574: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.243:8080/dial?request=hostName&protocol=http&host=10.244.1.242&port=8080&tries=1'] Namespace:pod-network-test-4133 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 00:37:24.575: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 00:37:24.728: INFO: Waiting for endpoints: map[]
Feb 18 00:37:24.731: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.243:8080/dial?request=hostName&protocol=http&host=10.244.2.130&port=8080&tries=1'] Namespace:pod-network-test-4133 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 18 00:37:24.731: INFO: >>> kubeConfig: /root/.kube/config
Feb 18 00:37:24.846: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:37:24.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4133" for this suite.
Feb 18 00:37:48.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:37:48.954: INFO: namespace pod-network-test-4133 deletion completed in 24.10305677s

• [SLOW TEST:52.657 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
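Editor's note: the intra-pod check above runs a webserver pod per schedulable node plus a host-network test pod, then execs the curl shown in the log against a /dial endpoint that fans the request out to a target pod IP and reports the hostname it got back; "Waiting for endpoints: map[]" indicates every expected endpoint has answered. A rough sketch of the webserver side, assuming the agnhost e2e test image (image name and tag are assumptions, not taken from this log):

apiVersion: v1
kind: Pod
metadata:
  name: netexec-web
spec:
  containers:
  - name: webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21   # assumed image; serves /hostName and /dial
    args: ["netexec", "--http-port=8080"]
    ports:
    - containerPort: 8080

A manual probe then mirrors the log's command: curl -g -q -s 'http://<probe-pod-ip>:8080/dial?request=hostName&protocol=http&host=<target-pod-ip>&port=8080&tries=1'.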
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:37:48.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-65726b7f-fd55-4cd3-bc18-466017e9b95a
STEP: Creating a pod to test consume configMaps
Feb 18 00:37:49.059: INFO: Waiting up to 5m0s for pod "pod-configmaps-17af300c-2a59-49fd-9490-66687eace1e7" in namespace "configmap-674" to be "success or failure"
Feb 18 00:37:49.129: INFO: Pod "pod-configmaps-17af300c-2a59-49fd-9490-66687eace1e7": Phase="Pending", Reason="", readiness=false. Elapsed: 69.912401ms
Feb 18 00:37:51.174: INFO: Pod "pod-configmaps-17af300c-2a59-49fd-9490-66687eace1e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114317529s
Feb 18 00:37:53.179: INFO: Pod "pod-configmaps-17af300c-2a59-49fd-9490-66687eace1e7": Phase="Running", Reason="", readiness=true. Elapsed: 4.119155161s
Feb 18 00:37:55.183: INFO: Pod "pod-configmaps-17af300c-2a59-49fd-9490-66687eace1e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.123288397s
STEP: Saw pod success
Feb 18 00:37:55.183: INFO: Pod "pod-configmaps-17af300c-2a59-49fd-9490-66687eace1e7" satisfied condition "success or failure"
Feb 18 00:37:55.186: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-17af300c-2a59-49fd-9490-66687eace1e7 container configmap-volume-test: 
STEP: delete the pod
Feb 18 00:37:55.211: INFO: Waiting for pod pod-configmaps-17af300c-2a59-49fd-9490-66687eace1e7 to disappear
Feb 18 00:37:55.215: INFO: Pod pod-configmaps-17af300c-2a59-49fd-9490-66687eace1e7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:37:55.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-674" for this suite.
Feb 18 00:38:01.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:38:01.379: INFO: namespace configmap-674 deletion completed in 6.160525392s

• [SLOW TEST:12.425 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
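Editor's note: the plain configMap volume above supports the same items mapping as the projected variant sketched earlier, just declared directly on the volume. A compact sketch with placeholder names:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                 # placeholder image
    command: ["cat", "/etc/cfg/path/to/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: my-config              # assumed to exist
      items:
      - key: data-1                # illustrative key
        path: path/to/data-1       # mapped relative path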
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:38:01.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1074.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1074.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 18 00:38:07.535: INFO: DNS probes using dns-1074/dns-test-3018b26b-0cfb-418d-8dde-7c216d44e974 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:38:07.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1074" for this suite.
Feb 18 00:38:13.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:38:13.748: INFO: namespace dns-1074 deletion completed in 6.168485039s

• [SLOW TEST:12.369 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
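Editor's note: the wheezy and jessie probes above loop dig over UDP and TCP until each lookup of kubernetes.default.svc.cluster.local (and the pod's own A record) writes an OK marker. A simpler one-shot check of the same cluster name (pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: dns-check
spec:
  restartPolicy: Never
  containers:
  - name: lookup
    image: busybox                 # placeholder; busybox nslookup suffices for an A lookup
    command: ["nslookup", "kubernetes.default.svc.cluster.local"]

kubectl logs dns-check should then show the service's ClusterIP if cluster DNS is healthy.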
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:38:13.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b1ee2f10-6fda-4996-bf88-6247f2997e86
STEP: Creating a pod to test consume secrets
Feb 18 00:38:13.844: INFO: Waiting up to 5m0s for pod "pod-secrets-5ccc6779-1131-42f4-a499-e3434e12d098" in namespace "secrets-9515" to be "success or failure"
Feb 18 00:38:13.873: INFO: Pod "pod-secrets-5ccc6779-1131-42f4-a499-e3434e12d098": Phase="Pending", Reason="", readiness=false. Elapsed: 29.04221ms
Feb 18 00:38:15.944: INFO: Pod "pod-secrets-5ccc6779-1131-42f4-a499-e3434e12d098": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100090921s
Feb 18 00:38:17.948: INFO: Pod "pod-secrets-5ccc6779-1131-42f4-a499-e3434e12d098": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104536952s
STEP: Saw pod success
Feb 18 00:38:17.949: INFO: Pod "pod-secrets-5ccc6779-1131-42f4-a499-e3434e12d098" satisfied condition "success or failure"
Feb 18 00:38:17.951: INFO: Trying to get logs from node iruya-worker pod pod-secrets-5ccc6779-1131-42f4-a499-e3434e12d098 container secret-env-test: 
STEP: delete the pod
Feb 18 00:38:18.108: INFO: Waiting for pod pod-secrets-5ccc6779-1131-42f4-a499-e3434e12d098 to disappear
Feb 18 00:38:18.132: INFO: Pod pod-secrets-5ccc6779-1131-42f4-a499-e3434e12d098 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:38:18.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9515" for this suite.
Feb 18 00:38:24.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:38:24.264: INFO: namespace secrets-9515 deletion completed in 6.128236511s

• [SLOW TEST:10.515 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
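Editor's note: the env-var consumption above wires a secret key into the container environment via valueFrom.secretKeyRef rather than a volume. A minimal sketch (secret and key names are placeholders for the generated ones in the log):

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-test
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox                 # placeholder image
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: my-secret          # assumed to exist
          key: data-1              # illustrative key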
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:38:24.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-1540aa67-cb2c-45ca-bcf1-7f27c691fafb
STEP: Creating secret with name s-test-opt-upd-ed26762e-7ebc-4618-8958-6eb473a2445c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-1540aa67-cb2c-45ca-bcf1-7f27c691fafb
STEP: Updating secret s-test-opt-upd-ed26762e-7ebc-4618-8958-6eb473a2445c
STEP: Creating secret with name s-test-opt-create-7e40d9a6-8ed1-4058-82d3-093f9fa4e12d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:39:46.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-509" for this suite.
Feb 18 00:40:08.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:40:08.856: INFO: namespace projected-509 deletion completed in 22.126814011s

• [SLOW TEST:104.591 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
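Editor's note: "optional" above means the projected sources may reference secrets that do not exist yet — the pod still starts, and the kubelet later reconciles creates, updates, and deletes of those secrets into the mounted files on its sync interval, which is why the "waiting to observe update in volume" step dominates this test's 104 seconds. A sketch of the volume shape (secret names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: optional-secrets-demo
spec:
  containers:
  - name: watcher
    image: busybox                 # placeholder image
    command: ["sh", "-c", "while true; do ls /etc/secrets; sleep 5; done"]
    volumeMounts:
    - name: secrets
      mountPath: /etc/secrets
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del     # exists now, deleted later
          optional: true
      - secret:
          name: s-test-opt-create  # created only after the pod starts
          optional: true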
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:40:08.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 18 00:40:08.983: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:08.987: INFO: Number of nodes with available pods: 0
Feb 18 00:40:08.987: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:10.042: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:10.045: INFO: Number of nodes with available pods: 0
Feb 18 00:40:10.045: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:10.993: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:10.998: INFO: Number of nodes with available pods: 0
Feb 18 00:40:10.998: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:12.001: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:12.093: INFO: Number of nodes with available pods: 0
Feb 18 00:40:12.093: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:12.992: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:12.995: INFO: Number of nodes with available pods: 0
Feb 18 00:40:12.995: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:13.993: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:13.996: INFO: Number of nodes with available pods: 2
Feb 18 00:40:13.996: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 18 00:40:14.085: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:14.088: INFO: Number of nodes with available pods: 1
Feb 18 00:40:14.088: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:15.102: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:15.105: INFO: Number of nodes with available pods: 1
Feb 18 00:40:15.105: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:16.111: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:16.115: INFO: Number of nodes with available pods: 1
Feb 18 00:40:16.115: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:17.093: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:17.097: INFO: Number of nodes with available pods: 1
Feb 18 00:40:17.097: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:18.094: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:18.098: INFO: Number of nodes with available pods: 1
Feb 18 00:40:18.098: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:19.093: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:19.097: INFO: Number of nodes with available pods: 1
Feb 18 00:40:19.097: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:20.093: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:20.097: INFO: Number of nodes with available pods: 1
Feb 18 00:40:20.097: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:21.099: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:21.101: INFO: Number of nodes with available pods: 1
Feb 18 00:40:21.102: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:22.094: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:22.098: INFO: Number of nodes with available pods: 1
Feb 18 00:40:22.098: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:23.093: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:23.104: INFO: Number of nodes with available pods: 1
Feb 18 00:40:23.105: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 00:40:24.094: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Feb 18 00:40:24.097: INFO: Number of nodes with available pods: 2
Feb 18 00:40:24.098: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-548, will wait for the garbage collector to delete the pods
Feb 18 00:40:24.160: INFO: Deleting DaemonSet.extensions daemon-set took: 6.084079ms
Feb 18 00:40:24.460: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.308692ms
Feb 18 00:40:29.166: INFO: Number of nodes with available pods: 0
Feb 18 00:40:29.166: INFO: Number of running nodes: 0, number of available pods: 0
Feb 18 00:40:29.168: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-548/daemonsets","resourceVersion":"6957682"},"items":null}

Feb 18 00:40:29.170: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-548/pods","resourceVersion":"6957682"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:40:29.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-548" for this suite.
Feb 18 00:40:35.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:40:35.282: INFO: namespace daemonsets-548 deletion completed in 6.102433337s

• [SLOW TEST:26.425 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:40:35.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-ceb44b6c-68a6-4096-a6a8-cac3e07804ec
STEP: Creating a pod to test consume secrets
Feb 18 00:40:35.409: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b8fa449d-1641-4af6-8289-e74bc15bbc87" in namespace "projected-9156" to be "success or failure"
Feb 18 00:40:35.432: INFO: Pod "pod-projected-secrets-b8fa449d-1641-4af6-8289-e74bc15bbc87": Phase="Pending", Reason="", readiness=false. Elapsed: 22.845026ms
Feb 18 00:40:37.435: INFO: Pod "pod-projected-secrets-b8fa449d-1641-4af6-8289-e74bc15bbc87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026444658s
Feb 18 00:40:39.440: INFO: Pod "pod-projected-secrets-b8fa449d-1641-4af6-8289-e74bc15bbc87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030630337s
STEP: Saw pod success
Feb 18 00:40:39.440: INFO: Pod "pod-projected-secrets-b8fa449d-1641-4af6-8289-e74bc15bbc87" satisfied condition "success or failure"
Feb 18 00:40:39.443: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-b8fa449d-1641-4af6-8289-e74bc15bbc87 container projected-secret-volume-test: 
STEP: delete the pod
Feb 18 00:40:39.504: INFO: Waiting for pod pod-projected-secrets-b8fa449d-1641-4af6-8289-e74bc15bbc87 to disappear
Feb 18 00:40:39.515: INFO: Pod pod-projected-secrets-b8fa449d-1641-4af6-8289-e74bc15bbc87 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:40:39.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9156" for this suite.
Feb 18 00:40:45.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:40:45.663: INFO: namespace projected-9156 deletion completed in 6.144333505s

• [SLOW TEST:10.379 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
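
For reference, the shape of what this spec exercises — a projected secret volume with an explicit defaultMode, consumed by a non-root container with fsGroup set — sketched as a manifest. The pod name, secret name, image, and uid/gid values are illustrative, not taken from the run:

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-example        # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                     # non-root, per the [LinuxOnly] spec
    fsGroup: 1001                       # group ownership applied to volume files
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      defaultMode: 0440                 # the file mode under test
      sources:
      - secret:
          name: example-secret          # hypothetical secret, assumed to exist
  restartPolicy: Never
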
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:40:45.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 18 00:40:45.776: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6022,SelfLink:/api/v1/namespaces/watch-6022/configmaps/e2e-watch-test-label-changed,UID:1b46ce44-34b3-4551-b570-50d4bcd49bf7,ResourceVersion:6957765,Generation:0,CreationTimestamp:2021-02-18 00:40:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb 18 00:40:45.777: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6022,SelfLink:/api/v1/namespaces/watch-6022/configmaps/e2e-watch-test-label-changed,UID:1b46ce44-34b3-4551-b570-50d4bcd49bf7,ResourceVersion:6957766,Generation:0,CreationTimestamp:2021-02-18 00:40:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb 18 00:40:45.777: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6022,SelfLink:/api/v1/namespaces/watch-6022/configmaps/e2e-watch-test-label-changed,UID:1b46ce44-34b3-4551-b570-50d4bcd49bf7,ResourceVersion:6957767,Generation:0,CreationTimestamp:2021-02-18 00:40:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 18 00:40:56.100: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6022,SelfLink:/api/v1/namespaces/watch-6022/configmaps/e2e-watch-test-label-changed,UID:1b46ce44-34b3-4551-b570-50d4bcd49bf7,ResourceVersion:6957788,Generation:0,CreationTimestamp:2021-02-18 00:40:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 18 00:40:56.101: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6022,SelfLink:/api/v1/namespaces/watch-6022/configmaps/e2e-watch-test-label-changed,UID:1b46ce44-34b3-4551-b570-50d4bcd49bf7,ResourceVersion:6957789,Generation:0,CreationTimestamp:2021-02-18 00:40:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb 18 00:40:56.101: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6022,SelfLink:/api/v1/namespaces/watch-6022/configmaps/e2e-watch-test-label-changed,UID:1b46ce44-34b3-4551-b570-50d4bcd49bf7,ResourceVersion:6957790,Generation:0,CreationTimestamp:2021-02-18 00:40:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:40:56.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6022" for this suite.
Feb 18 00:41:02.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:41:02.449: INFO: namespace watch-6022 deletion completed in 6.343463592s

• [SLOW TEST:16.785 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
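
The watched object above, reconstructed from the ADDED/MODIFIED/DELETED events, is simply a labelled ConfigMap; the kubectl form of such a watch in the comment is illustrative, not the test's own client code:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored   # the watch selects on this label
data: {}
# A watch filtered by this label (e.g.
#   kubectl get configmap -l watch-this-configmap=label-changed-and-restored --watch)
# reports DELETED when the label is changed away and ADDED when it is restored,
# even though the object itself was only updated -- exactly the event sequence above.
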
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:41:02.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 00:41:02.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:41:06.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1835" for this suite.
Feb 18 00:41:52.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:41:52.667: INFO: namespace pods-1835 deletion completed in 46.111327464s

• [SLOW TEST:50.217 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
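
What this spec drives, sketched as a manifest: a pod that writes to stdout, whose logs are then read back over a websocket connection to the standard log subresource (GET /api/v1/namespaces/<ns>/pods/<name>/log). Pod name, image, and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: websocket-logs-example   # hypothetical name
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo a few log lines && sleep 600"]
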
------------------------------
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:41:52.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb 18 00:41:52.735: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 18 00:41:52.779: INFO: Waiting for terminating namespaces to be deleted...
Feb 18 00:41:52.781: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Feb 18 00:41:52.789: INFO: coredns-5d4dd4b4db-69khc from kube-system started at 2021-01-10 17:26:03 +0000 UTC (1 container status recorded)
Feb 18 00:41:52.789: INFO: 	Container coredns ready: true, restart count 1
Feb 18 00:41:52.789: INFO: kube-proxy-24ww6 from kube-system started at 2021-01-10 17:25:00 +0000 UTC (1 container status recorded)
Feb 18 00:41:52.789: INFO: 	Container kube-proxy ready: true, restart count 1
Feb 18 00:41:52.789: INFO: chaos-controller-manager-6c68f56f79-2j2xr from default started at 2021-01-11 03:53:47 +0000 UTC (1 container status recorded)
Feb 18 00:41:52.789: INFO: 	Container chaos-mesh ready: true, restart count 2
Feb 18 00:41:52.789: INFO: local-path-provisioner-7f465859dc-zj67c from local-path-storage started at 2021-01-10 17:26:02 +0000 UTC (1 container status recorded)
Feb 18 00:41:52.789: INFO: 	Container local-path-provisioner ready: true, restart count 7
Feb 18 00:41:52.789: INFO: kindnet-vgcd6 from kube-system started at 2021-01-10 17:25:04 +0000 UTC (1 container status recorded)
Feb 18 00:41:52.789: INFO: 	Container kindnet-cni ready: true, restart count 1
Feb 18 00:41:52.789: INFO: chaos-daemon-s74sn from default started at 2021-01-11 03:53:47 +0000 UTC (1 container status recorded)
Feb 18 00:41:52.789: INFO: 	Container chaos-daemon ready: true, restart count 1
Feb 18 00:41:52.789: INFO: coredns-5d4dd4b4db-b9gp2 from kube-system started at 2021-01-10 17:25:57 +0000 UTC (1 container status recorded)
Feb 18 00:41:52.789: INFO: 	Container coredns ready: true, restart count 1
Feb 18 00:41:52.789: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Feb 18 00:41:52.793: INFO: kindnet-gbtx5 from kube-system started at 2021-01-10 17:25:04 +0000 UTC (1 container status recorded)
Feb 18 00:41:52.793: INFO: 	Container kindnet-cni ready: true, restart count 2
Feb 18 00:41:52.793: INFO: kube-proxy-h6zb5 from kube-system started at 2021-01-10 17:25:00 +0000 UTC (1 container status recorded)
Feb 18 00:41:52.794: INFO: 	Container kube-proxy ready: true, restart count 1
Feb 18 00:41:52.794: INFO: chaos-daemon-7gq5t from default started at 2021-01-11 03:53:47 +0000 UTC (1 container status recorded)
Feb 18 00:41:52.794: INFO: 	Container chaos-daemon ready: true, restart count 1
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4484bcc9-d6d0-410f-bca6-517c79de2cab 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-4484bcc9-d6d0-410f-bca6-517c79de2cab off the node iruya-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-4484bcc9-d6d0-410f-bca6-517c79de2cab
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:42:01.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9875" for this suite.
Feb 18 00:42:11.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:42:11.179: INFO: namespace sched-pred-9875 deletion completed in 10.133300294s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:18.512 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
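
The relaunched pod from this spec, sketched as a manifest. The nodeSelector key and value mirror the label applied to iruya-worker in the run above; the pod name and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: with-labels-example   # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/e2e-4484bcc9-d6d0-410f-bca6-517c79de2cab: "42"   # label from the run
  containers:
  - name: with-labels
    image: docker.io/library/nginx:1.14-alpine
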
------------------------------
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:42:11.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 00:42:11.589: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: 
alternatives.log
containers/

[proxy responses (1) through (19) returned the same two-entry listing; the remainder of this spec's output is lost to truncation in the source log]
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-db20fd95-d4d1-463a-8b3f-47dd993ac4f5 in namespace container-probe-8701
Feb 18 00:42:21.917: INFO: Started pod test-webserver-db20fd95-d4d1-463a-8b3f-47dd993ac4f5 in namespace container-probe-8701
STEP: checking the pod's current state and verifying that restartCount is present
Feb 18 00:42:21.920: INFO: Initial restart count of pod test-webserver-db20fd95-d4d1-463a-8b3f-47dd993ac4f5 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:46:22.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8701" for this suite.
Feb 18 00:46:28.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:46:28.943: INFO: namespace container-probe-8701 deletion completed in 6.13696652s

• [SLOW TEST:251.170 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
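
A sketch of the kind of pod this spec creates: a webserver whose HTTP liveness probe keeps succeeding, so restartCount must remain 0 for the whole observation window. The image, probe path, and timings here are assumptions, not taken from the run:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-ok-example   # hypothetical name
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/nginx:1.14-alpine
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /               # always answers, so the kubelet never restarts it
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 3
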
------------------------------
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:46:28.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Feb 18 00:46:29.039: INFO: Pod name pod-release: Found 0 pods out of 1
Feb 18 00:46:34.044: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:46:35.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1709" for this suite.
Feb 18 00:46:41.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:46:41.229: INFO: namespace replication-controller-1709 deletion completed in 6.110215054s

• [SLOW TEST:12.286 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
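
The flow this spec checks — relabeling a pod so it no longer matches its ReplicationController's selector makes the RC release it (drop its controller ownerReference) and create a replacement — starts from an RC shaped like this sketch (the selector name follows the run's pod-name prefix; the image is assumed):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release        # change this label on a pod and the RC lets it go
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: pod-release
        image: docker.io/library/nginx:1.14-alpine
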
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:46:41.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 18 00:46:45.958: INFO: Successfully updated pod "pod-update-activedeadlineseconds-5e2aca26-61c3-4a14-a6d5-cb84b3c28cde"
Feb 18 00:46:45.958: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-5e2aca26-61c3-4a14-a6d5-cb84b3c28cde" in namespace "pods-6560" to be "terminated due to deadline exceeded"
Feb 18 00:46:45.961: INFO: Pod "pod-update-activedeadlineseconds-5e2aca26-61c3-4a14-a6d5-cb84b3c28cde": Phase="Running", Reason="", readiness=true. Elapsed: 2.58391ms
Feb 18 00:46:48.295: INFO: Pod "pod-update-activedeadlineseconds-5e2aca26-61c3-4a14-a6d5-cb84b3c28cde": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.336460026s
Feb 18 00:46:48.295: INFO: Pod "pod-update-activedeadlineseconds-5e2aca26-61c3-4a14-a6d5-cb84b3c28cde" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:46:48.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6560" for this suite.
Feb 18 00:46:54.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:46:54.430: INFO: namespace pods-6560 deletion completed in 6.114726772s

• [SLOW TEST:13.201 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
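
The mechanism under test, sketched: once a pod's activeDeadlineSeconds elapses, the kubelet kills it and the pod goes Failed with reason DeadlineExceeded, as seen above. The spec updates the field on a running pod; this manifest simply sets it at creation, and names and values are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: activedeadline-example   # hypothetical name
spec:
  activeDeadlineSeconds: 5       # wall-clock limit for the whole pod
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
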
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:46:54.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 00:46:54.496: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b692c5a-fab9-45cc-86b4-5e5c71400682" in namespace "projected-1953" to be "success or failure"
Feb 18 00:46:54.500: INFO: Pod "downwardapi-volume-1b692c5a-fab9-45cc-86b4-5e5c71400682": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173032ms
Feb 18 00:46:56.505: INFO: Pod "downwardapi-volume-1b692c5a-fab9-45cc-86b4-5e5c71400682": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008664819s
Feb 18 00:46:58.509: INFO: Pod "downwardapi-volume-1b692c5a-fab9-45cc-86b4-5e5c71400682": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012603579s
STEP: Saw pod success
Feb 18 00:46:58.509: INFO: Pod "downwardapi-volume-1b692c5a-fab9-45cc-86b4-5e5c71400682" satisfied condition "success or failure"
Feb 18 00:46:58.515: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1b692c5a-fab9-45cc-86b4-5e5c71400682 container client-container: 
STEP: delete the pod
Feb 18 00:46:58.637: INFO: Waiting for pod downwardapi-volume-1b692c5a-fab9-45cc-86b4-5e5c71400682 to disappear
Feb 18 00:46:58.640: INFO: Pod downwardapi-volume-1b692c5a-fab9-45cc-86b4-5e5c71400682 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:46:58.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1953" for this suite.
Feb 18 00:47:04.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:47:04.749: INFO: namespace projected-1953 deletion completed in 6.102679161s

• [SLOW TEST:10.317 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
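
The volume plugin under test, sketched: a projected downwardAPI source exposing the pod's own name as a file. Pod name and mount path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never
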
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:47:04.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:47:04.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5603" for this suite.
Feb 18 00:47:26.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:47:27.145: INFO: namespace pods-5603 deletion completed in 22.241661608s

• [SLOW TEST:22.396 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
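
What "Set QOS Class" means here, sketched: the API server derives status.qosClass from the pod's resource spec at admission, so it is already set when the spec verifies it. With requests equal to limits for every container the class is Guaranteed; the run's actual resource values are not shown in the log, so these are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: qos-example   # hypothetical name
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
      limits:          # equal to requests -> status.qosClass: Guaranteed
        cpu: 100m
        memory: 64Mi
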
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:47:27.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 18 00:47:27.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-692'
Feb 18 00:47:30.608: INFO: stderr: ""
Feb 18 00:47:30.608: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb 18 00:47:30.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-692'
Feb 18 00:47:40.832: INFO: stderr: ""
Feb 18 00:47:40.832: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:47:40.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-692" for this suite.
Feb 18 00:47:46.943: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:47:47.033: INFO: namespace kubectl-692 deletion completed in 6.113832173s

• [SLOW TEST:19.889 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
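
The kubectl invocation above, expressed as the roughly equivalent manifest the run-pod/v1 generator produces (the container name and run label follow the generator's conventions; this is a sketch, not captured output):

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod        # label added by the generator
spec:
  restartPolicy: Never             # from --restart=Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
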
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:47:47.034: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 18 00:47:47.124: INFO: Waiting up to 5m0s for pod "pod-b914dda3-2494-4cc6-b0eb-378210b57025" in namespace "emptydir-5441" to be "success or failure"
Feb 18 00:47:47.127: INFO: Pod "pod-b914dda3-2494-4cc6-b0eb-378210b57025": Phase="Pending", Reason="", readiness=false. Elapsed: 2.760821ms
Feb 18 00:47:49.131: INFO: Pod "pod-b914dda3-2494-4cc6-b0eb-378210b57025": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006784781s
Feb 18 00:47:51.135: INFO: Pod "pod-b914dda3-2494-4cc6-b0eb-378210b57025": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010479801s
STEP: Saw pod success
Feb 18 00:47:51.135: INFO: Pod "pod-b914dda3-2494-4cc6-b0eb-378210b57025" satisfied condition "success or failure"
Feb 18 00:47:51.137: INFO: Trying to get logs from node iruya-worker2 pod pod-b914dda3-2494-4cc6-b0eb-378210b57025 container test-container: 
STEP: delete the pod
Feb 18 00:47:51.180: INFO: Waiting for pod pod-b914dda3-2494-4cc6-b0eb-378210b57025 to disappear
Feb 18 00:47:51.203: INFO: Pod pod-b914dda3-2494-4cc6-b0eb-378210b57025 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:47:51.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5441" for this suite.
Feb 18 00:47:57.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:47:57.311: INFO: namespace emptydir-5441 deletion completed in 6.104052599s

• [SLOW TEST:10.277 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
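
The volume under test, sketched. medium: Memory gives the tmpfs backing named in the spec title; the 0777 mode and the permission assertions are performed by the test container's own arguments, so only the medium and the non-root user are expressible in a plain manifest. Names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example   # hypothetical name
spec:
  securityContext:
    runAsUser: 1000        # non-root
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ld /mnt/ed && touch /mnt/ed/probe"]
    volumeMounts:
    - name: ed
      mountPath: /mnt/ed
  volumes:
  - name: ed
    emptyDir:
      medium: Memory       # tmpfs; omit for the node-default medium tested later in this log
  restartPolicy: Never
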
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:47:57.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 00:47:57.400: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8bac7c7-a245-4539-9423-1989776f0f7f" in namespace "downward-api-5896" to be "success or failure"
Feb 18 00:47:57.442: INFO: Pod "downwardapi-volume-f8bac7c7-a245-4539-9423-1989776f0f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 41.969408ms
Feb 18 00:47:59.530: INFO: Pod "downwardapi-volume-f8bac7c7-a245-4539-9423-1989776f0f7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129873559s
Feb 18 00:48:01.830: INFO: Pod "downwardapi-volume-f8bac7c7-a245-4539-9423-1989776f0f7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.429481139s
STEP: Saw pod success
Feb 18 00:48:01.830: INFO: Pod "downwardapi-volume-f8bac7c7-a245-4539-9423-1989776f0f7f" satisfied condition "success or failure"
Feb 18 00:48:01.832: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f8bac7c7-a245-4539-9423-1989776f0f7f container client-container: 
STEP: delete the pod
Feb 18 00:48:01.899: INFO: Waiting for pod downwardapi-volume-f8bac7c7-a245-4539-9423-1989776f0f7f to disappear
Feb 18 00:48:01.977: INFO: Pod downwardapi-volume-f8bac7c7-a245-4539-9423-1989776f0f7f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:48:01.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5896" for this suite.
Feb 18 00:48:08.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:48:08.121: INFO: namespace downward-api-5896 deletion completed in 6.140488358s

• [SLOW TEST:10.810 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
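
The plugin under test, sketched: a downwardAPI volume exposing the container's own CPU request as a file via resourceFieldRef. Names and the request value are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
  restartPolicy: Never
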
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:48:08.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 18 00:48:08.817: INFO: Waiting up to 5m0s for pod "downward-api-b8465a3f-b1e3-40fa-833c-f51b1ecf1f9f" in namespace "downward-api-2622" to be "success or failure"
Feb 18 00:48:08.949: INFO: Pod "downward-api-b8465a3f-b1e3-40fa-833c-f51b1ecf1f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 131.96274ms
Feb 18 00:48:11.242: INFO: Pod "downward-api-b8465a3f-b1e3-40fa-833c-f51b1ecf1f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.424466972s
Feb 18 00:48:13.246: INFO: Pod "downward-api-b8465a3f-b1e3-40fa-833c-f51b1ecf1f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.428587369s
Feb 18 00:48:15.250: INFO: Pod "downward-api-b8465a3f-b1e3-40fa-833c-f51b1ecf1f9f": Phase="Running", Reason="", readiness=true. Elapsed: 6.432870124s
Feb 18 00:48:17.255: INFO: Pod "downward-api-b8465a3f-b1e3-40fa-833c-f51b1ecf1f9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.437426773s
STEP: Saw pod success
Feb 18 00:48:17.255: INFO: Pod "downward-api-b8465a3f-b1e3-40fa-833c-f51b1ecf1f9f" satisfied condition "success or failure"
Feb 18 00:48:17.258: INFO: Trying to get logs from node iruya-worker2 pod downward-api-b8465a3f-b1e3-40fa-833c-f51b1ecf1f9f container dapi-container: 
STEP: delete the pod
Feb 18 00:48:17.327: INFO: Waiting for pod downward-api-b8465a3f-b1e3-40fa-833c-f51b1ecf1f9f to disappear
Feb 18 00:48:17.335: INFO: Pod downward-api-b8465a3f-b1e3-40fa-833c-f51b1ecf1f9f no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:48:17.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2622" for this suite.
Feb 18 00:48:25.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:48:25.512: INFO: namespace downward-api-2622 deletion completed in 8.174397937s

• [SLOW TEST:17.391 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
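
The fallback this spec checks, sketched: when a container declares no limits, resourceFieldRef environment variables resolve to the node's allocatable capacity instead. Env var names, image, and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-defaults-example   # hypothetical name
spec:
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep _LIMIT"]
    env:                          # no resources block on this container,
    - name: CPU_LIMIT             # so these fall back to node allocatable
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
  restartPolicy: Never
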
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:48:25.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb 18 00:48:26.860: INFO: created pod pod-service-account-defaultsa
Feb 18 00:48:26.860: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 18 00:48:26.953: INFO: created pod pod-service-account-mountsa
Feb 18 00:48:26.953: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 18 00:48:27.106: INFO: created pod pod-service-account-nomountsa
Feb 18 00:48:27.106: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 18 00:48:27.463: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 18 00:48:27.463: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 18 00:48:27.769: INFO: created pod pod-service-account-mountsa-mountspec
Feb 18 00:48:27.769: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 18 00:48:27.979: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 18 00:48:27.979: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 18 00:48:27.982: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 18 00:48:27.982: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 18 00:48:28.569: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 18 00:48:28.569: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 18 00:48:28.936: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 18 00:48:28.936: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:48:28.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5790" for this suite.
Feb 18 00:49:00.320: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:49:00.407: INFO: namespace svcaccounts-5790 deletion completed in 30.677401285s

• [SLOW TEST:34.895 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
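
The opt-out matrix the spec walks — automount disabled at the ServiceAccount level, at the pod level, or both, with the pod-level field taking precedence — sketched for one cell of that matrix (names are illustrative):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                       # hypothetical name
automountServiceAccountToken: false      # SA-level opt-out
---
apiVersion: v1
kind: Pod
metadata:
  name: nomountsa-nomountspec-example    # hypothetical name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false    # pod-level setting wins when both are set
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
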
------------------------------
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:49:00.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7599
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb 18 00:49:00.606: INFO: Found 0 stateful pods, waiting for 3
Feb 18 00:49:10.611: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 00:49:10.611: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 00:49:10.611: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 18 00:49:20.611: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 00:49:20.611: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 00:49:20.611: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 18 00:49:20.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7599 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 18 00:49:20.893: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Feb 18 00:49:20.893: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 18 00:49:20.893: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb 18 00:49:30.946: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 18 00:49:41.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7599 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 18 00:49:41.236: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Feb 18 00:49:41.236: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 18 00:49:41.236: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 18 00:49:51.254: INFO: Waiting for StatefulSet statefulset-7599/ss2 to complete update
Feb 18 00:49:51.254: INFO: Waiting for Pod statefulset-7599/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 18 00:49:51.254: INFO: Waiting for Pod statefulset-7599/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 18 00:49:51.254: INFO: Waiting for Pod statefulset-7599/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 18 00:50:01.262: INFO: Waiting for StatefulSet statefulset-7599/ss2 to complete update
Feb 18 00:50:01.262: INFO: Waiting for Pod statefulset-7599/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb 18 00:50:13.621: INFO: Waiting for StatefulSet statefulset-7599/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 18 00:50:21.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7599 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb 18 00:50:21.539: INFO: stderr: "+ mv -v /usr/share/nginx/html/index.html /tmp/\n"
Feb 18 00:50:21.539: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb 18 00:50:21.539: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb 18 00:50:31.571: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 18 00:50:41.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7599 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb 18 00:50:42.163: INFO: stderr: "+ mv -v /tmp/index.html /usr/share/nginx/html/\n"
Feb 18 00:50:42.163: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb 18 00:50:42.163: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb 18 00:50:52.184: INFO: Waiting for StatefulSet statefulset-7599/ss2 to complete update
Feb 18 00:50:52.184: INFO: Waiting for Pod statefulset-7599/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 18 00:50:52.184: INFO: Waiting for Pod statefulset-7599/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 18 00:50:52.184: INFO: Waiting for Pod statefulset-7599/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 18 00:51:02.191: INFO: Waiting for StatefulSet statefulset-7599/ss2 to complete update
Feb 18 00:51:02.191: INFO: Waiting for Pod statefulset-7599/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb 18 00:51:12.190: INFO: Waiting for StatefulSet statefulset-7599/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb 18 00:51:22.194: INFO: Deleting all statefulset in ns statefulset-7599
Feb 18 00:51:22.196: INFO: Scaling statefulset ss2 to 0
Feb 18 00:51:52.217: INFO: Waiting for statefulset status.replicas updated to 0
Feb 18 00:51:52.220: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:51:52.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7599" for this suite.
Feb 18 00:52:00.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:52:00.433: INFO: namespace statefulset-7599 deletion completed in 8.161097677s

• [SLOW TEST:180.026 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
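
The StatefulSet this spec drives, sketched. Changing the pod template's image creates a new controller revision (ss2-7c9b54fd4c in the run) and pods are replaced in reverse ordinal order; reverting the template rolls back to the prior revision (ss2-6c5cd755cd) the same way. The name ss2, the service test, and the images match the run; the labels and replica count are assumptions:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test            # headless service created for the suite
  replicas: 3
  selector:
    matchLabels:
      app: ss2                 # assumed label
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # updated to 1.15-alpine, then rolled back
  updateStrategy:
    type: RollingUpdate        # each template change becomes a new revision
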
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:52:00.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 18 00:52:00.542: INFO: Waiting up to 5m0s for pod "pod-026835c7-656c-4189-9bc7-86667f350b45" in namespace "emptydir-9594" to be "success or failure"
Feb 18 00:52:00.550: INFO: Pod "pod-026835c7-656c-4189-9bc7-86667f350b45": Phase="Pending", Reason="", readiness=false. Elapsed: 7.672169ms
Feb 18 00:52:02.554: INFO: Pod "pod-026835c7-656c-4189-9bc7-86667f350b45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011809s
Feb 18 00:52:04.558: INFO: Pod "pod-026835c7-656c-4189-9bc7-86667f350b45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015917202s
STEP: Saw pod success
Feb 18 00:52:04.558: INFO: Pod "pod-026835c7-656c-4189-9bc7-86667f350b45" satisfied condition "success or failure"
Feb 18 00:52:04.561: INFO: Trying to get logs from node iruya-worker pod pod-026835c7-656c-4189-9bc7-86667f350b45 container test-container: 
STEP: delete the pod
Feb 18 00:52:04.587: INFO: Waiting for pod pod-026835c7-656c-4189-9bc7-86667f350b45 to disappear
Feb 18 00:52:04.604: INFO: Pod pod-026835c7-656c-4189-9bc7-86667f350b45 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:52:04.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9594" for this suite.
Feb 18 00:52:10.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:52:10.728: INFO: namespace emptydir-9594 deletion completed in 6.119648774s

• [SLOW TEST:10.294 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
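
The (non-root,0777,default) variant above boils down to a pod that mounts an emptyDir on the default (node-disk) medium, runs as a non-root UID, writes a file with 0777 permissions, and verifies the result via the container logs the framework fetches. A minimal sketch under those assumptions; the image, UID, and command are stand-ins for the suite's test container:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-demo        # illustrative; the suite generates its pod names
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # non-root, per the [LinuxOnly] variant name
  containers:
  - name: test-container          # the log's container is also named test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # no medium set = "default" (node filesystem), not tmpfs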
------------------------------
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:52:10.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb 18 00:52:10.801: INFO: PodSpec: initContainers in spec.initContainers
Feb 18 00:52:59.695: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-c40a8a44-1270-43c0-87f7-27c5f2505f6c", GenerateName:"", Namespace:"init-container-358", SelfLink:"/api/v1/namespaces/init-container-358/pods/pod-init-c40a8a44-1270-43c0-87f7-27c5f2505f6c", UID:"887348b9-2e25-48c3-a59d-7a264474aaff", ResourceVersion:"6960008", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63749206330, loc:(*time.Location)(0x7edea20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"801494582"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4brjv", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001e82100), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4brjv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4brjv", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4brjv", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003c720e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0032fc600), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003c72170)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003c72190)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003c72198), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003c7219c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206331, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206331, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206331, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206330, loc:(*time.Location)(0x7edea20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"10.244.2.148", StartTime:(*v1.Time)(0xc001de0060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0008e81c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0008e8230)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://eaa012189a5e582065d487c0ba082eff011205b6d8f671257eee22637a42fa44"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001de00a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001de0080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:52:59.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-358" for this suite.
Feb 18 00:53:21.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:53:21.878: INFO: namespace init-container-358 deletion completed in 22.142677433s

• [SLOW TEST:71.150 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
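
The pod dumped above reduces to the spec below: init1 runs /bin/false and keeps failing, so init2 and the app container run1 never start, while restartPolicy: Always keeps the kubelet restarting init1 (RestartCount:3 by the time the framework observed two failures). Reconstructed from the object dump; only the generated pod name is simplified:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo             # the suite's name is generated (pod-init-<uid>)
  labels:
    name: foo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]       # exits non-zero every time
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]        # never reached while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:
      limits:
        cpu: 100m
        memory: 52428800          # 50Mi expressed in bytes, as in the dump

The pod conditions in the dump confirm the assertion: Initialized is False with "containers with incomplete status: [init1 init2]", and run1 sits in Waiting with no ContainerID.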
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:53:21.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-f43aef3f-28b9-4c9b-a6aa-6ab39127ffd4
STEP: Creating configMap with name cm-test-opt-upd-e6569e2b-734c-4e82-92f0-b5d22a2f61d1
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f43aef3f-28b9-4c9b-a6aa-6ab39127ffd4
STEP: Updating configmap cm-test-opt-upd-e6569e2b-734c-4e82-92f0-b5d22a2f61d1
STEP: Creating configMap with name cm-test-opt-create-61c7dd33-dc53-4fda-9252-295a4d5a69ff
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:54:58.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3101" for this suite.
Feb 18 00:55:20.506: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:55:20.587: INFO: namespace configmap-3101 deletion completed in 22.095825368s

• [SLOW TEST:118.708 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
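
What the steps above exercise: three ConfigMaps mounted into one pod with optional: true, so the pod tolerates one being deleted (cm-test-opt-del-...), sees an in-place update (cm-test-opt-upd-...), and picks up one created only after the pod started (cm-test-opt-create-...). A minimal sketch of one such optional mount; the key and mount path are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-optional-demo   # illustrative
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm-create/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: cm-create
      mountPath: /etc/cm-create
  volumes:
  - name: cm-create
    configMap:
      name: cm-test-opt-create    # may not exist yet; optional lets the pod start anyway
      optional: true

The kubelet refreshes mounted ConfigMap files on its periodic sync loop, which is why the test's last step is simply "waiting to observe update in volume" rather than restarting the pod.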
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:55:20.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb 18 00:55:20.701: INFO: Waiting up to 5m0s for pod "var-expansion-e820e6cd-1f3f-4ac9-90bf-ca6d9f308ef1" in namespace "var-expansion-1031" to be "success or failure"
Feb 18 00:55:20.722: INFO: Pod "var-expansion-e820e6cd-1f3f-4ac9-90bf-ca6d9f308ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.988985ms
Feb 18 00:55:22.726: INFO: Pod "var-expansion-e820e6cd-1f3f-4ac9-90bf-ca6d9f308ef1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025328828s
Feb 18 00:55:24.730: INFO: Pod "var-expansion-e820e6cd-1f3f-4ac9-90bf-ca6d9f308ef1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029141093s
STEP: Saw pod success
Feb 18 00:55:24.730: INFO: Pod "var-expansion-e820e6cd-1f3f-4ac9-90bf-ca6d9f308ef1" satisfied condition "success or failure"
Feb 18 00:55:24.732: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-e820e6cd-1f3f-4ac9-90bf-ca6d9f308ef1 container dapi-container: 
STEP: delete the pod
Feb 18 00:55:24.790: INFO: Waiting for pod var-expansion-e820e6cd-1f3f-4ac9-90bf-ca6d9f308ef1 to disappear
Feb 18 00:55:24.802: INFO: Pod var-expansion-e820e6cd-1f3f-4ac9-90bf-ca6d9f308ef1 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:55:24.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1031" for this suite.
Feb 18 00:55:30.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:55:30.902: INFO: namespace var-expansion-1031 deletion completed in 6.097246046s

• [SLOW TEST:10.314 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
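
The substitution being tested above is kubelet-side $(VAR) expansion in a container's command and args, which happens before the container runtime ever sees the arguments. A minimal sketch; the variable name and value are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo        # illustrative; the suite generates its pod names
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container          # same container name the log shows
    image: docker.io/library/busybox:1.29
    env:
    - name: TEST_VAR
      value: "test-value"
    command: ["sh", "-c"]
    args: ["echo $(TEST_VAR)"]    # expanded by the kubelet, not the shell; prints test-value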
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:55:30.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 00:55:31.086: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 18 00:55:36.091: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 18 00:55:36.091: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 18 00:55:38.095: INFO: Creating deployment "test-rollover-deployment"
Feb 18 00:55:38.122: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 18 00:55:40.129: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 18 00:55:40.136: INFO: Ensure that both replica sets have 1 created replica
Feb 18 00:55:40.142: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 18 00:55:40.148: INFO: Updating deployment test-rollover-deployment
Feb 18 00:55:40.148: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 18 00:55:42.198: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 18 00:55:42.205: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 18 00:55:42.211: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 00:55:42.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206540, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 00:55:44.220: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 00:55:44.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206544, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 00:55:46.219: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 00:55:46.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206544, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 00:55:48.220: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 00:55:48.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206544, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 00:55:50.220: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 00:55:50.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206544, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 00:55:52.219: INFO: all replica sets need to contain the pod-template-hash label
Feb 18 00:55:52.219: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206544, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206538, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 00:55:54.227: INFO: 
Feb 18 00:55:54.227: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 18 00:55:54.235: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-4856,SelfLink:/apis/apps/v1/namespaces/deployment-4856/deployments/test-rollover-deployment,UID:d8416bfe-3971-45e2-86c6-cb16c0d1fb49,ResourceVersion:6960504,Generation:2,CreationTimestamp:2021-02-18 00:55:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2021-02-18 00:55:38 +0000 UTC 2021-02-18 00:55:38 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-02-18 00:55:54 +0000 UTC 2021-02-18 00:55:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 18 00:55:54.238: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-4856,SelfLink:/apis/apps/v1/namespaces/deployment-4856/replicasets/test-rollover-deployment-854595fc44,UID:6e18c493-a265-479d-903b-05f22f1bf104,ResourceVersion:6960493,Generation:2,CreationTimestamp:2021-02-18 00:55:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d8416bfe-3971-45e2-86c6-cb16c0d1fb49 0xc003213fd7 0xc003213fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 18 00:55:54.238: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 18 00:55:54.238: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-4856,SelfLink:/apis/apps/v1/namespaces/deployment-4856/replicasets/test-rollover-controller,UID:88d8b40d-f864-46f3-8a7a-c1d749909f3c,ResourceVersion:6960502,Generation:2,CreationTimestamp:2021-02-18 00:55:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d8416bfe-3971-45e2-86c6-cb16c0d1fb49 0xc003213f07 0xc003213f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 18 00:55:54.239: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-4856,SelfLink:/apis/apps/v1/namespaces/deployment-4856/replicasets/test-rollover-deployment-9b8b997cf,UID:491d4ba0-94f3-448e-9522-32908cec4c3d,ResourceVersion:6960460,Generation:2,CreationTimestamp:2021-02-18 00:55:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment d8416bfe-3971-45e2-86c6-cb16c0d1fb49 0xc002dce0a0 0xc002dce0a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 18 00:55:54.242: INFO: Pod "test-rollover-deployment-854595fc44-shcq2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-shcq2,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-4856,SelfLink:/api/v1/namespaces/deployment-4856/pods/test-rollover-deployment-854595fc44-shcq2,UID:0c508a91-4a0d-484a-a24b-2e7a2100d734,ResourceVersion:6960471,Generation:0,CreationTimestamp:2021-02-18 00:55:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 6e18c493-a265-479d-903b-05f22f1bf104 0xc00297f047 0xc00297f048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wcjwj {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wcjwj,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-wcjwj true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00297f0c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00297f0e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:55:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:55:43 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:55:43 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 00:55:40 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.151,StartTime:2021-02-18 00:55:40 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2021-02-18 00:55:43 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://c1ad67f940f45dfc646a67d8b5975c141978c3b12875fb128c766bcd2a540969}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:55:54.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4856" for this suite.
Feb 18 00:56:02.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:56:02.387: INFO: namespace deployment-4856 deletion completed in 8.142257842s

• [SLOW TEST:31.485 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
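
The rollover flow above is easier to read against the Deployment spec the dumps imply: one replica, minReadySeconds: 10 so the first rollout is still in flight when the image is swapped, and a maxUnavailable: 0 / maxSurge: 1 rolling update. Reconstructed from the object dump; only the formatting is changed:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10             # keeps the first rollout incomplete when the image is swapped
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0   # the mid-rollout update lands here

The suite then asserts that both superseded ReplicaSets (test-rollover-controller, adopted from the bare pods, and test-rollover-deployment-9b8b997cf, the abandoned first revision) end at zero replicas, which is exactly what the final dumps show.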
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:56:02.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-cd54d3ee-98ac-4a4d-ad08-ffd317fd9c68
STEP: Creating a pod to test consume configMaps
Feb 18 00:56:02.478: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-71d59c11-87c0-4bc7-add3-c790b43ea581" in namespace "projected-5078" to be "success or failure"
Feb 18 00:56:02.487: INFO: Pod "pod-projected-configmaps-71d59c11-87c0-4bc7-add3-c790b43ea581": Phase="Pending", Reason="", readiness=false. Elapsed: 8.962884ms
Feb 18 00:56:04.490: INFO: Pod "pod-projected-configmaps-71d59c11-87c0-4bc7-add3-c790b43ea581": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012123697s
Feb 18 00:56:06.493: INFO: Pod "pod-projected-configmaps-71d59c11-87c0-4bc7-add3-c790b43ea581": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01553978s
STEP: Saw pod success
Feb 18 00:56:06.493: INFO: Pod "pod-projected-configmaps-71d59c11-87c0-4bc7-add3-c790b43ea581" satisfied condition "success or failure"
Feb 18 00:56:06.496: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-71d59c11-87c0-4bc7-add3-c790b43ea581 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 18 00:56:06.512: INFO: Waiting for pod pod-projected-configmaps-71d59c11-87c0-4bc7-add3-c790b43ea581 to disappear
Feb 18 00:56:06.517: INFO: Pod pod-projected-configmaps-71d59c11-87c0-4bc7-add3-c790b43ea581 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:56:06.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5078" for this suite.
Feb 18 00:56:12.548: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:56:12.670: INFO: namespace projected-5078 deletion completed in 6.150093049s

• [SLOW TEST:10.282 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
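
The defaultMode assertion above corresponds to a projected volume along the lines of the sketch below; the mode value and key are assumptions, since the log does not print the mode the suite used:

apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test   # container name as in the log
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      defaultMode: 0400           # illustrative; applies to every projected file unless overridden per item
      sources:
      - configMap:
          name: projected-configmap-test-volume   # the suite appends a generated UID to this name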
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:56:12.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb 18 00:56:12.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-979'
Feb 18 00:56:12.837: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 18 00:56:12.837: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb 18 00:56:14.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-979'
Feb 18 00:56:15.039: INFO: stderr: ""
Feb 18 00:56:15.039: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:56:15.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-979" for this suite.
Feb 18 00:58:17.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:58:17.333: INFO: namespace kubectl-979 deletion completed in 2m2.289729186s

• [SLOW TEST:124.662 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
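
As the deprecation warning at 00:56:12.837 notes, kubectl run with the default generator on this 1.15 cluster creates a Deployment behind the scenes. Roughly the object the command above generates, under the assumption that the deployment/apps.v1 generator labels everything run=<name>:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine

That split between API groups also explains the mixed output above: the create path reports deployment.apps while the delete path still resolves through deployment.extensions on this cluster version.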
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:58:17.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3808/configmap-test-22a07513-0963-4c84-bbcf-4ee2a98d90db
STEP: Creating a pod to test consume configMaps
Feb 18 00:58:17.428: INFO: Waiting up to 5m0s for pod "pod-configmaps-07c9cb7f-580b-42d4-bd88-b53a732eebea" in namespace "configmap-3808" to be "success or failure"
Feb 18 00:58:17.439: INFO: Pod "pod-configmaps-07c9cb7f-580b-42d4-bd88-b53a732eebea": Phase="Pending", Reason="", readiness=false. Elapsed: 11.089116ms
Feb 18 00:58:19.443: INFO: Pod "pod-configmaps-07c9cb7f-580b-42d4-bd88-b53a732eebea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015358599s
Feb 18 00:58:21.447: INFO: Pod "pod-configmaps-07c9cb7f-580b-42d4-bd88-b53a732eebea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019413347s
STEP: Saw pod success
Feb 18 00:58:21.447: INFO: Pod "pod-configmaps-07c9cb7f-580b-42d4-bd88-b53a732eebea" satisfied condition "success or failure"
Feb 18 00:58:21.451: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-07c9cb7f-580b-42d4-bd88-b53a732eebea container env-test: 
STEP: delete the pod
Feb 18 00:58:21.475: INFO: Waiting for pod pod-configmaps-07c9cb7f-580b-42d4-bd88-b53a732eebea to disappear
Feb 18 00:58:21.479: INFO: Pod pod-configmaps-07c9cb7f-580b-42d4-bd88-b53a732eebea no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:58:21.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3808" for this suite.
Feb 18 00:58:27.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:58:27.582: INFO: namespace configmap-3808 deletion completed in 6.100051933s

• [SLOW TEST:10.249 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
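
The env-test container above consumes the ConfigMap through configMapKeyRef rather than a volume mount, so the value arrives as a process environment variable at container start. A minimal sketch; the key and variable names are assumptions, since the log only shows the generated ConfigMap name:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo       # illustrative; the suite generates its pod names
spec:
  restartPolicy: Never
  containers:
  - name: env-test                # container name as in the log
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-22a07513-0963-4c84-bbcf-4ee2a98d90db   # name from the run above
          key: data-1             # illustrative key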
------------------------------
SSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:58:27.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8296
I0218 00:58:27.642035       6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8296, replica count: 1
I0218 00:58:28.692480       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 00:58:29.692729       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 00:58:30.692998       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0218 00:58:31.693250       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 18 00:58:31.822: INFO: Created: latency-svc-hpqlk
Feb 18 00:58:31.849: INFO: Got endpoints: latency-svc-hpqlk [55.448441ms]
Feb 18 00:58:31.887: INFO: Created: latency-svc-xndms
Feb 18 00:58:31.900: INFO: Got endpoints: latency-svc-xndms [51.764997ms]
Feb 18 00:58:31.917: INFO: Created: latency-svc-8p59k
Feb 18 00:58:31.930: INFO: Got endpoints: latency-svc-8p59k [80.66196ms]
Feb 18 00:58:31.993: INFO: Created: latency-svc-2tjq9
Feb 18 00:58:32.021: INFO: Got endpoints: latency-svc-2tjq9 [171.774083ms]
Feb 18 00:58:32.022: INFO: Created: latency-svc-l4jlc
Feb 18 00:58:32.050: INFO: Got endpoints: latency-svc-l4jlc [201.571872ms]
Feb 18 00:58:32.081: INFO: Created: latency-svc-hpztk
Feb 18 00:58:32.090: INFO: Got endpoints: latency-svc-hpztk [241.271191ms]
Feb 18 00:58:32.137: INFO: Created: latency-svc-kzzlt
Feb 18 00:58:32.156: INFO: Got endpoints: latency-svc-kzzlt [307.683701ms]
Feb 18 00:58:32.157: INFO: Created: latency-svc-8vhqv
Feb 18 00:58:32.174: INFO: Got endpoints: latency-svc-8vhqv [324.931185ms]
Feb 18 00:58:32.193: INFO: Created: latency-svc-nqw8b
Feb 18 00:58:32.210: INFO: Got endpoints: latency-svc-nqw8b [360.762702ms]
Feb 18 00:58:32.229: INFO: Created: latency-svc-vwdzp
Feb 18 00:58:32.274: INFO: Got endpoints: latency-svc-vwdzp [424.89949ms]
Feb 18 00:58:32.283: INFO: Created: latency-svc-mb22q
Feb 18 00:58:32.300: INFO: Got endpoints: latency-svc-mb22q [451.106024ms]
Feb 18 00:58:32.338: INFO: Created: latency-svc-l8vxz
Feb 18 00:58:32.354: INFO: Got endpoints: latency-svc-l8vxz [504.830362ms]
Feb 18 00:58:32.369: INFO: Created: latency-svc-78bcv
Feb 18 00:58:32.429: INFO: Got endpoints: latency-svc-78bcv [580.285632ms]
Feb 18 00:58:32.445: INFO: Created: latency-svc-rlwxx
Feb 18 00:58:32.463: INFO: Got endpoints: latency-svc-rlwxx [614.089064ms]
Feb 18 00:58:32.525: INFO: Created: latency-svc-z2fhq
Feb 18 00:58:32.627: INFO: Got endpoints: latency-svc-z2fhq [778.272133ms]
Feb 18 00:58:32.663: INFO: Created: latency-svc-djh2p
Feb 18 00:58:32.678: INFO: Got endpoints: latency-svc-djh2p [829.323935ms]
Feb 18 00:58:32.699: INFO: Created: latency-svc-ht2j2
Feb 18 00:58:32.758: INFO: Got endpoints: latency-svc-ht2j2 [857.768899ms]
Feb 18 00:58:32.781: INFO: Created: latency-svc-7jzm9
Feb 18 00:58:32.793: INFO: Got endpoints: latency-svc-7jzm9 [862.92828ms]
Feb 18 00:58:32.817: INFO: Created: latency-svc-xxc7f
Feb 18 00:58:32.828: INFO: Got endpoints: latency-svc-xxc7f [807.504367ms]
Feb 18 00:58:32.847: INFO: Created: latency-svc-rhbqp
Feb 18 00:58:32.897: INFO: Got endpoints: latency-svc-rhbqp [846.014667ms]
Feb 18 00:58:32.922: INFO: Created: latency-svc-nmx4f
Feb 18 00:58:32.935: INFO: Got endpoints: latency-svc-nmx4f [844.44434ms]
Feb 18 00:58:32.951: INFO: Created: latency-svc-bg68t
Feb 18 00:58:32.965: INFO: Got endpoints: latency-svc-bg68t [808.538234ms]
Feb 18 00:58:33.047: INFO: Created: latency-svc-hl558
Feb 18 00:58:33.086: INFO: Got endpoints: latency-svc-hl558 [912.501441ms]
Feb 18 00:58:33.088: INFO: Created: latency-svc-kfktx
Feb 18 00:58:33.103: INFO: Got endpoints: latency-svc-kfktx [892.791984ms]
Feb 18 00:58:33.128: INFO: Created: latency-svc-5tz65
Feb 18 00:58:33.145: INFO: Got endpoints: latency-svc-5tz65 [870.924943ms]
Feb 18 00:58:33.203: INFO: Created: latency-svc-jr6w8
Feb 18 00:58:33.220: INFO: Got endpoints: latency-svc-jr6w8 [919.939773ms]
Feb 18 00:58:33.245: INFO: Created: latency-svc-nrjb2
Feb 18 00:58:33.253: INFO: Got endpoints: latency-svc-nrjb2 [898.900813ms]
Feb 18 00:58:33.274: INFO: Created: latency-svc-ct2zt
Feb 18 00:58:33.290: INFO: Got endpoints: latency-svc-ct2zt [860.870111ms]
Feb 18 00:58:33.340: INFO: Created: latency-svc-v7nfb
Feb 18 00:58:33.344: INFO: Got endpoints: latency-svc-v7nfb [881.315636ms]
Feb 18 00:58:33.381: INFO: Created: latency-svc-6rh9x
Feb 18 00:58:33.391: INFO: Got endpoints: latency-svc-6rh9x [763.98284ms]
Feb 18 00:58:33.405: INFO: Created: latency-svc-r2ntp
Feb 18 00:58:33.427: INFO: Got endpoints: latency-svc-r2ntp [749.120471ms]
Feb 18 00:58:33.478: INFO: Created: latency-svc-rm75s
Feb 18 00:58:33.487: INFO: Got endpoints: latency-svc-rm75s [729.052692ms]
Feb 18 00:58:33.508: INFO: Created: latency-svc-mpbcl
Feb 18 00:58:33.517: INFO: Got endpoints: latency-svc-mpbcl [724.503897ms]
Feb 18 00:58:33.538: INFO: Created: latency-svc-2w8px
Feb 18 00:58:33.548: INFO: Got endpoints: latency-svc-2w8px [719.293488ms]
Feb 18 00:58:33.575: INFO: Created: latency-svc-ltqb9
Feb 18 00:58:33.609: INFO: Got endpoints: latency-svc-ltqb9 [712.177581ms]
Feb 18 00:58:33.633: INFO: Created: latency-svc-dbjwn
Feb 18 00:58:33.648: INFO: Got endpoints: latency-svc-dbjwn [713.455713ms]
Feb 18 00:58:33.668: INFO: Created: latency-svc-x5sbj
Feb 18 00:58:33.684: INFO: Got endpoints: latency-svc-x5sbj [718.858446ms]
Feb 18 00:58:33.705: INFO: Created: latency-svc-2q8vl
Feb 18 00:58:33.744: INFO: Got endpoints: latency-svc-2q8vl [657.43651ms]
Feb 18 00:58:33.764: INFO: Created: latency-svc-fdcbf
Feb 18 00:58:33.780: INFO: Got endpoints: latency-svc-fdcbf [677.164019ms]
Feb 18 00:58:33.820: INFO: Created: latency-svc-87vsr
Feb 18 00:58:33.866: INFO: Got endpoints: latency-svc-87vsr [721.309273ms]
Feb 18 00:58:33.892: INFO: Created: latency-svc-cx57t
Feb 18 00:58:33.906: INFO: Got endpoints: latency-svc-cx57t [685.529737ms]
Feb 18 00:58:33.929: INFO: Created: latency-svc-wfdhj
Feb 18 00:58:33.949: INFO: Got endpoints: latency-svc-wfdhj [696.033861ms]
Feb 18 00:58:33.999: INFO: Created: latency-svc-jd9z2
Feb 18 00:58:34.009: INFO: Got endpoints: latency-svc-jd9z2 [718.694635ms]
Feb 18 00:58:34.028: INFO: Created: latency-svc-x74bb
Feb 18 00:58:34.039: INFO: Got endpoints: latency-svc-x74bb [694.317973ms]
Feb 18 00:58:34.052: INFO: Created: latency-svc-vsl6r
Feb 18 00:58:34.063: INFO: Got endpoints: latency-svc-vsl6r [671.440174ms]
Feb 18 00:58:34.076: INFO: Created: latency-svc-qzcbz
Feb 18 00:58:34.086: INFO: Got endpoints: latency-svc-qzcbz [658.863677ms]
Feb 18 00:58:34.130: INFO: Created: latency-svc-t8wqw
Feb 18 00:58:34.141: INFO: Got endpoints: latency-svc-t8wqw [653.091875ms]
Feb 18 00:58:34.161: INFO: Created: latency-svc-t9d7c
Feb 18 00:58:34.177: INFO: Got endpoints: latency-svc-t9d7c [659.509277ms]
Feb 18 00:58:34.210: INFO: Created: latency-svc-mbt2v
Feb 18 00:58:34.224: INFO: Got endpoints: latency-svc-mbt2v [676.409454ms]
Feb 18 00:58:34.250: INFO: Created: latency-svc-8xwff
Feb 18 00:58:34.259: INFO: Got endpoints: latency-svc-8xwff [650.230447ms]
Feb 18 00:58:34.292: INFO: Created: latency-svc-pdp56
Feb 18 00:58:34.307: INFO: Got endpoints: latency-svc-pdp56 [659.164037ms]
Feb 18 00:58:34.329: INFO: Created: latency-svc-w9rw2
Feb 18 00:58:34.343: INFO: Got endpoints: latency-svc-w9rw2 [659.117867ms]
Feb 18 00:58:34.406: INFO: Created: latency-svc-2tmz5
Feb 18 00:58:34.415: INFO: Got endpoints: latency-svc-2tmz5 [671.224994ms]
Feb 18 00:58:34.444: INFO: Created: latency-svc-v8cxn
Feb 18 00:58:34.463: INFO: Got endpoints: latency-svc-v8cxn [683.514047ms]
Feb 18 00:58:34.484: INFO: Created: latency-svc-nv2ct
Feb 18 00:58:34.543: INFO: Got endpoints: latency-svc-nv2ct [676.653924ms]
Feb 18 00:58:34.557: INFO: Created: latency-svc-7rgbq
Feb 18 00:58:34.572: INFO: Got endpoints: latency-svc-7rgbq [666.273ms]
Feb 18 00:58:34.612: INFO: Created: latency-svc-6psbc
Feb 18 00:58:34.627: INFO: Got endpoints: latency-svc-6psbc [677.738116ms]
Feb 18 00:58:34.694: INFO: Created: latency-svc-7dx52
Feb 18 00:58:34.710: INFO: Got endpoints: latency-svc-7dx52 [700.977815ms]
Feb 18 00:58:34.730: INFO: Created: latency-svc-wwg4r
Feb 18 00:58:34.740: INFO: Got endpoints: latency-svc-wwg4r [701.315681ms]
Feb 18 00:58:34.766: INFO: Created: latency-svc-v8cr6
Feb 18 00:58:34.831: INFO: Got endpoints: latency-svc-v8cr6 [767.761241ms]
Feb 18 00:58:34.845: INFO: Created: latency-svc-4sjxx
Feb 18 00:58:34.861: INFO: Got endpoints: latency-svc-4sjxx [774.479455ms]
Feb 18 00:58:34.888: INFO: Created: latency-svc-csl2c
Feb 18 00:58:34.923: INFO: Got endpoints: latency-svc-csl2c [782.317552ms]
Feb 18 00:58:34.986: INFO: Created: latency-svc-vggvw
Feb 18 00:58:35.029: INFO: Got endpoints: latency-svc-vggvw [852.510325ms]
Feb 18 00:58:35.031: INFO: Created: latency-svc-6zs7p
Feb 18 00:58:35.044: INFO: Got endpoints: latency-svc-6zs7p [820.132742ms]
Feb 18 00:58:35.066: INFO: Created: latency-svc-874gd
Feb 18 00:58:35.130: INFO: Got endpoints: latency-svc-874gd [870.786862ms]
Feb 18 00:58:35.158: INFO: Created: latency-svc-s24vz
Feb 18 00:58:35.176: INFO: Got endpoints: latency-svc-s24vz [868.926629ms]
Feb 18 00:58:35.193: INFO: Created: latency-svc-dpxs8
Feb 18 00:58:35.218: INFO: Got endpoints: latency-svc-dpxs8 [875.110851ms]
Feb 18 00:58:35.262: INFO: Created: latency-svc-rhgfl
Feb 18 00:58:35.281: INFO: Got endpoints: latency-svc-rhgfl [865.999657ms]
Feb 18 00:58:35.311: INFO: Created: latency-svc-hfm8t
Feb 18 00:58:35.321: INFO: Got endpoints: latency-svc-hfm8t [857.507303ms]
Feb 18 00:58:35.348: INFO: Created: latency-svc-rtz2r
Feb 18 00:58:35.357: INFO: Got endpoints: latency-svc-rtz2r [814.262333ms]
Feb 18 00:58:35.418: INFO: Created: latency-svc-sz7vr
Feb 18 00:58:35.435: INFO: Got endpoints: latency-svc-sz7vr [863.096223ms]
Feb 18 00:58:35.481: INFO: Created: latency-svc-x9r7h
Feb 18 00:58:35.495: INFO: Got endpoints: latency-svc-x9r7h [868.049785ms]
Feb 18 00:58:35.567: INFO: Created: latency-svc-s4dmp
Feb 18 00:58:35.578: INFO: Got endpoints: latency-svc-s4dmp [868.3965ms]
Feb 18 00:58:35.623: INFO: Created: latency-svc-dj29t
Feb 18 00:58:35.633: INFO: Got endpoints: latency-svc-dj29t [892.526619ms]
Feb 18 00:58:35.654: INFO: Created: latency-svc-g9h4l
Feb 18 00:58:35.663: INFO: Got endpoints: latency-svc-g9h4l [832.122711ms]
Feb 18 00:58:35.705: INFO: Created: latency-svc-sczv5
Feb 18 00:58:35.709: INFO: Got endpoints: latency-svc-sczv5 [848.371123ms]
Feb 18 00:58:35.732: INFO: Created: latency-svc-2hznc
Feb 18 00:58:35.753: INFO: Got endpoints: latency-svc-2hznc [830.022964ms]
Feb 18 00:58:35.805: INFO: Created: latency-svc-tbf2t
Feb 18 00:58:35.893: INFO: Got endpoints: latency-svc-tbf2t [863.819117ms]
Feb 18 00:58:35.893: INFO: Created: latency-svc-5s5zk
Feb 18 00:58:35.919: INFO: Got endpoints: latency-svc-5s5zk [874.958093ms]
Feb 18 00:58:35.999: INFO: Created: latency-svc-nx6x2
Feb 18 00:58:36.021: INFO: Got endpoints: latency-svc-nx6x2 [890.491005ms]
Feb 18 00:58:36.062: INFO: Created: latency-svc-4tlx4
Feb 18 00:58:36.087: INFO: Got endpoints: latency-svc-4tlx4 [910.468514ms]
Feb 18 00:58:36.142: INFO: Created: latency-svc-bh8nq
Feb 18 00:58:36.169: INFO: Got endpoints: latency-svc-bh8nq [950.635872ms]
Feb 18 00:58:36.169: INFO: Created: latency-svc-lwghq
Feb 18 00:58:36.193: INFO: Got endpoints: latency-svc-lwghq [911.155517ms]
Feb 18 00:58:36.210: INFO: Created: latency-svc-gh67s
Feb 18 00:58:36.219: INFO: Got endpoints: latency-svc-gh67s [898.507572ms]
Feb 18 00:58:36.234: INFO: Created: latency-svc-64jlk
Feb 18 00:58:36.310: INFO: Got endpoints: latency-svc-64jlk [952.395631ms]
Feb 18 00:58:36.312: INFO: Created: latency-svc-ps9zp
Feb 18 00:58:36.338: INFO: Got endpoints: latency-svc-ps9zp [902.6997ms]
Feb 18 00:58:36.375: INFO: Created: latency-svc-6jr65
Feb 18 00:58:36.394: INFO: Got endpoints: latency-svc-6jr65 [898.921648ms]
Feb 18 00:58:36.456: INFO: Created: latency-svc-wrccg
Feb 18 00:58:36.492: INFO: Got endpoints: latency-svc-wrccg [913.811404ms]
Feb 18 00:58:36.493: INFO: Created: latency-svc-7868r
Feb 18 00:58:36.516: INFO: Got endpoints: latency-svc-7868r [883.374437ms]
Feb 18 00:58:36.585: INFO: Created: latency-svc-w9m6w
Feb 18 00:58:36.590: INFO: Got endpoints: latency-svc-w9m6w [927.307415ms]
Feb 18 00:58:36.613: INFO: Created: latency-svc-7lcn9
Feb 18 00:58:36.632: INFO: Got endpoints: latency-svc-7lcn9 [922.370816ms]
Feb 18 00:58:36.650: INFO: Created: latency-svc-nfsl4
Feb 18 00:58:36.729: INFO: Got endpoints: latency-svc-nfsl4 [975.765629ms]
Feb 18 00:58:36.752: INFO: Created: latency-svc-jbsr8
Feb 18 00:58:36.770: INFO: Got endpoints: latency-svc-jbsr8 [877.17507ms]
Feb 18 00:58:36.794: INFO: Created: latency-svc-tb2k8
Feb 18 00:58:36.806: INFO: Got endpoints: latency-svc-tb2k8 [886.845374ms]
Feb 18 00:58:36.828: INFO: Created: latency-svc-hcsqs
Feb 18 00:58:36.885: INFO: Got endpoints: latency-svc-hcsqs [864.11311ms]
Feb 18 00:58:36.907: INFO: Created: latency-svc-jjzxx
Feb 18 00:58:36.926: INFO: Got endpoints: latency-svc-jjzxx [839.123985ms]
Feb 18 00:58:36.956: INFO: Created: latency-svc-nzdtz
Feb 18 00:58:37.029: INFO: Got endpoints: latency-svc-nzdtz [859.523604ms]
Feb 18 00:58:37.052: INFO: Created: latency-svc-zs2fp
Feb 18 00:58:37.071: INFO: Got endpoints: latency-svc-zs2fp [878.087941ms]
Feb 18 00:58:37.094: INFO: Created: latency-svc-qmknj
Feb 18 00:58:37.113: INFO: Got endpoints: latency-svc-qmknj [892.946963ms]
Feb 18 00:58:37.166: INFO: Created: latency-svc-dgzf2
Feb 18 00:58:37.178: INFO: Got endpoints: latency-svc-dgzf2 [868.659633ms]
Feb 18 00:58:37.200: INFO: Created: latency-svc-mq8bc
Feb 18 00:58:37.209: INFO: Got endpoints: latency-svc-mq8bc [870.855307ms]
Feb 18 00:58:37.226: INFO: Created: latency-svc-nm6fq
Feb 18 00:58:37.232: INFO: Got endpoints: latency-svc-nm6fq [838.606401ms]
Feb 18 00:58:37.248: INFO: Created: latency-svc-64246
Feb 18 00:58:37.257: INFO: Got endpoints: latency-svc-64246 [764.101136ms]
Feb 18 00:58:37.310: INFO: Created: latency-svc-cfq6v
Feb 18 00:58:37.315: INFO: Got endpoints: latency-svc-cfq6v [799.126572ms]
Feb 18 00:58:37.333: INFO: Created: latency-svc-44lnc
Feb 18 00:58:37.351: INFO: Got endpoints: latency-svc-44lnc [761.295445ms]
Feb 18 00:58:37.370: INFO: Created: latency-svc-jfsnx
Feb 18 00:58:37.406: INFO: Got endpoints: latency-svc-jfsnx [773.77402ms]
Feb 18 00:58:37.454: INFO: Created: latency-svc-rcvhp
Feb 18 00:58:37.459: INFO: Got endpoints: latency-svc-rcvhp [729.997061ms]
Feb 18 00:58:37.482: INFO: Created: latency-svc-fgsth
Feb 18 00:58:37.501: INFO: Got endpoints: latency-svc-fgsth [730.834958ms]
Feb 18 00:58:37.536: INFO: Created: latency-svc-ks48c
Feb 18 00:58:37.585: INFO: Got endpoints: latency-svc-ks48c [778.706189ms]
Feb 18 00:58:37.602: INFO: Created: latency-svc-rd58d
Feb 18 00:58:37.622: INFO: Got endpoints: latency-svc-rd58d [737.413361ms]
Feb 18 00:58:37.638: INFO: Created: latency-svc-plw5z
Feb 18 00:58:37.646: INFO: Got endpoints: latency-svc-plw5z [720.056068ms]
Feb 18 00:58:37.669: INFO: Created: latency-svc-h5ljf
Feb 18 00:58:37.723: INFO: Got endpoints: latency-svc-h5ljf [694.049557ms]
Feb 18 00:58:37.753: INFO: Created: latency-svc-bn9qg
Feb 18 00:58:37.772: INFO: Got endpoints: latency-svc-bn9qg [701.345869ms]
Feb 18 00:58:37.879: INFO: Created: latency-svc-cqj89
Feb 18 00:58:37.886: INFO: Got endpoints: latency-svc-cqj89 [773.344811ms]
Feb 18 00:58:37.939: INFO: Created: latency-svc-czbf4
Feb 18 00:58:37.952: INFO: Got endpoints: latency-svc-czbf4 [773.027711ms]
Feb 18 00:58:37.967: INFO: Created: latency-svc-kxprr
Feb 18 00:58:37.975: INFO: Got endpoints: latency-svc-kxprr [766.53057ms]
Feb 18 00:58:38.047: INFO: Created: latency-svc-6b9dt
Feb 18 00:58:38.071: INFO: Got endpoints: latency-svc-6b9dt [838.318113ms]
Feb 18 00:58:38.072: INFO: Created: latency-svc-swf84
Feb 18 00:58:38.082: INFO: Got endpoints: latency-svc-swf84 [825.690581ms]
Feb 18 00:58:38.113: INFO: Created: latency-svc-snwd4
Feb 18 00:58:38.137: INFO: Got endpoints: latency-svc-snwd4 [821.526499ms]
Feb 18 00:58:38.184: INFO: Created: latency-svc-949lp
Feb 18 00:58:38.190: INFO: Got endpoints: latency-svc-949lp [838.924275ms]
Feb 18 00:58:38.213: INFO: Created: latency-svc-brgwb
Feb 18 00:58:38.233: INFO: Got endpoints: latency-svc-brgwb [826.914432ms]
Feb 18 00:58:38.255: INFO: Created: latency-svc-tqdwf
Feb 18 00:58:38.275: INFO: Got endpoints: latency-svc-tqdwf [815.687105ms]
Feb 18 00:58:38.340: INFO: Created: latency-svc-9smjf
Feb 18 00:58:38.382: INFO: Got endpoints: latency-svc-9smjf [880.914815ms]
Feb 18 00:58:38.384: INFO: Created: latency-svc-9hjnn
Feb 18 00:58:38.407: INFO: Got endpoints: latency-svc-9hjnn [821.800294ms]
Feb 18 00:58:38.431: INFO: Created: latency-svc-hrknt
Feb 18 00:58:38.489: INFO: Got endpoints: latency-svc-hrknt [866.863041ms]
Feb 18 00:58:38.525: INFO: Created: latency-svc-qt6m9
Feb 18 00:58:38.539: INFO: Got endpoints: latency-svc-qt6m9 [892.600078ms]
Feb 18 00:58:38.554: INFO: Created: latency-svc-xtr82
Feb 18 00:58:38.569: INFO: Got endpoints: latency-svc-xtr82 [846.415086ms]
Feb 18 00:58:38.585: INFO: Created: latency-svc-hb5q7
Feb 18 00:58:38.633: INFO: Got endpoints: latency-svc-hb5q7 [860.393443ms]
Feb 18 00:58:38.639: INFO: Created: latency-svc-xqtjx
Feb 18 00:58:38.659: INFO: Got endpoints: latency-svc-xqtjx [772.690768ms]
Feb 18 00:58:38.675: INFO: Created: latency-svc-s9v2p
Feb 18 00:58:38.695: INFO: Got endpoints: latency-svc-s9v2p [743.317614ms]
Feb 18 00:58:38.724: INFO: Created: latency-svc-xkmvq
Feb 18 00:58:38.753: INFO: Got endpoints: latency-svc-xkmvq [777.429768ms]
Feb 18 00:58:38.766: INFO: Created: latency-svc-wlxf8
Feb 18 00:58:38.802: INFO: Got endpoints: latency-svc-wlxf8 [730.868964ms]
Feb 18 00:58:38.826: INFO: Created: latency-svc-2mlfm
Feb 18 00:58:38.837: INFO: Got endpoints: latency-svc-2mlfm [755.057684ms]
Feb 18 00:58:38.879: INFO: Created: latency-svc-9rv96
Feb 18 00:58:38.903: INFO: Got endpoints: latency-svc-9rv96 [765.835989ms]
Feb 18 00:58:38.903: INFO: Created: latency-svc-w56kl
Feb 18 00:58:38.922: INFO: Got endpoints: latency-svc-w56kl [731.119292ms]
Feb 18 00:58:38.945: INFO: Created: latency-svc-bpphp
Feb 18 00:58:38.964: INFO: Got endpoints: latency-svc-bpphp [731.141662ms]
Feb 18 00:58:39.010: INFO: Created: latency-svc-t4zzq
Feb 18 00:58:39.024: INFO: Got endpoints: latency-svc-t4zzq [748.964531ms]
Feb 18 00:58:39.053: INFO: Created: latency-svc-lwm5v
Feb 18 00:58:39.066: INFO: Got endpoints: latency-svc-lwm5v [684.059263ms]
Feb 18 00:58:39.137: INFO: Created: latency-svc-xqj55
Feb 18 00:58:39.144: INFO: Got endpoints: latency-svc-xqj55 [737.4071ms]
Feb 18 00:58:39.168: INFO: Created: latency-svc-96vbv
Feb 18 00:58:39.174: INFO: Got endpoints: latency-svc-96vbv [684.603109ms]
Feb 18 00:58:39.210: INFO: Created: latency-svc-mh7lg
Feb 18 00:58:39.216: INFO: Got endpoints: latency-svc-mh7lg [677.171687ms]
Feb 18 00:58:39.268: INFO: Created: latency-svc-96jtx
Feb 18 00:58:39.276: INFO: Got endpoints: latency-svc-96jtx [706.731646ms]
Feb 18 00:58:39.298: INFO: Created: latency-svc-hb9tx
Feb 18 00:58:39.316: INFO: Got endpoints: latency-svc-hb9tx [682.924054ms]
Feb 18 00:58:39.317: INFO: Created: latency-svc-2jqlk
Feb 18 00:58:39.330: INFO: Got endpoints: latency-svc-2jqlk [671.511101ms]
Feb 18 00:58:39.358: INFO: Created: latency-svc-54lcd
Feb 18 00:58:39.402: INFO: Got endpoints: latency-svc-54lcd [706.549887ms]
Feb 18 00:58:39.432: INFO: Created: latency-svc-95c78
Feb 18 00:58:39.449: INFO: Got endpoints: latency-svc-95c78 [696.115988ms]
Feb 18 00:58:39.468: INFO: Created: latency-svc-2dgvz
Feb 18 00:58:39.479: INFO: Got endpoints: latency-svc-2dgvz [676.967017ms]
Feb 18 00:58:39.498: INFO: Created: latency-svc-2rdn8
Feb 18 00:58:39.525: INFO: Got endpoints: latency-svc-2rdn8 [687.548959ms]
Feb 18 00:58:39.539: INFO: Created: latency-svc-p9w58
Feb 18 00:58:39.551: INFO: Got endpoints: latency-svc-p9w58 [647.837109ms]
Feb 18 00:58:39.569: INFO: Created: latency-svc-2q67n
Feb 18 00:58:39.581: INFO: Got endpoints: latency-svc-2q67n [659.243793ms]
Feb 18 00:58:39.616: INFO: Created: latency-svc-dllrm
Feb 18 00:58:39.663: INFO: Got endpoints: latency-svc-dllrm [698.78098ms]
Feb 18 00:58:39.688: INFO: Created: latency-svc-8h4fw
Feb 18 00:58:39.707: INFO: Got endpoints: latency-svc-8h4fw [683.714335ms]
Feb 18 00:58:39.725: INFO: Created: latency-svc-hg5vj
Feb 18 00:58:39.744: INFO: Got endpoints: latency-svc-hg5vj [677.25299ms]
Feb 18 00:58:39.813: INFO: Created: latency-svc-b7lc9
Feb 18 00:58:39.845: INFO: Got endpoints: latency-svc-b7lc9 [701.096944ms]
Feb 18 00:58:39.846: INFO: Created: latency-svc-4sxzp
Feb 18 00:58:39.894: INFO: Got endpoints: latency-svc-4sxzp [719.753785ms]
Feb 18 00:58:39.965: INFO: Created: latency-svc-lnfv4
Feb 18 00:58:39.983: INFO: Got endpoints: latency-svc-lnfv4 [767.15158ms]
Feb 18 00:58:40.000: INFO: Created: latency-svc-5qwkc
Feb 18 00:58:40.013: INFO: Got endpoints: latency-svc-5qwkc [737.109021ms]
Feb 18 00:58:40.042: INFO: Created: latency-svc-ckm2j
Feb 18 00:58:40.082: INFO: Got endpoints: latency-svc-ckm2j [766.360426ms]
Feb 18 00:58:40.109: INFO: Created: latency-svc-wk2fq
Feb 18 00:58:40.132: INFO: Got endpoints: latency-svc-wk2fq [801.899524ms]
Feb 18 00:58:40.151: INFO: Created: latency-svc-pg95h
Feb 18 00:58:40.162: INFO: Got endpoints: latency-svc-pg95h [760.319113ms]
Feb 18 00:58:40.175: INFO: Created: latency-svc-wbhvf
Feb 18 00:58:40.226: INFO: Got endpoints: latency-svc-wbhvf [776.534771ms]
Feb 18 00:58:40.245: INFO: Created: latency-svc-h5hxg
Feb 18 00:58:40.264: INFO: Got endpoints: latency-svc-h5hxg [784.939298ms]
Feb 18 00:58:40.287: INFO: Created: latency-svc-blj6t
Feb 18 00:58:40.306: INFO: Got endpoints: latency-svc-blj6t [780.913865ms]
Feb 18 00:58:40.364: INFO: Created: latency-svc-tv5hv
Feb 18 00:58:40.392: INFO: Got endpoints: latency-svc-tv5hv [840.892683ms]
Feb 18 00:58:40.428: INFO: Created: latency-svc-bz84l
Feb 18 00:58:40.445: INFO: Got endpoints: latency-svc-bz84l [863.635724ms]
Feb 18 00:58:40.514: INFO: Created: latency-svc-sqt8v
Feb 18 00:58:40.559: INFO: Got endpoints: latency-svc-sqt8v [896.437758ms]
Feb 18 00:58:40.560: INFO: Created: latency-svc-fcjq8
Feb 18 00:58:40.599: INFO: Got endpoints: latency-svc-fcjq8 [891.304613ms]
Feb 18 00:58:40.658: INFO: Created: latency-svc-2qk69
Feb 18 00:58:40.666: INFO: Got endpoints: latency-svc-2qk69 [922.335679ms]
Feb 18 00:58:40.713: INFO: Created: latency-svc-97qf6
Feb 18 00:58:40.732: INFO: Got endpoints: latency-svc-97qf6 [887.054131ms]
Feb 18 00:58:40.751: INFO: Created: latency-svc-q8tm9
Feb 18 00:58:40.801: INFO: Got endpoints: latency-svc-q8tm9 [906.827034ms]
Feb 18 00:58:40.817: INFO: Created: latency-svc-6qfrm
Feb 18 00:58:40.834: INFO: Got endpoints: latency-svc-6qfrm [850.623123ms]
Feb 18 00:58:40.859: INFO: Created: latency-svc-7qfmn
Feb 18 00:58:40.869: INFO: Got endpoints: latency-svc-7qfmn [855.699553ms]
Feb 18 00:58:40.882: INFO: Created: latency-svc-4khjv
Feb 18 00:58:40.893: INFO: Got endpoints: latency-svc-4khjv [811.411611ms]
Feb 18 00:58:40.938: INFO: Created: latency-svc-6kdfp
Feb 18 00:58:40.959: INFO: Got endpoints: latency-svc-6kdfp [826.827174ms]
Feb 18 00:58:40.959: INFO: Created: latency-svc-xgkbx
Feb 18 00:58:40.977: INFO: Got endpoints: latency-svc-xgkbx [815.188811ms]
Feb 18 00:58:41.038: INFO: Created: latency-svc-xcrcq
Feb 18 00:58:41.076: INFO: Got endpoints: latency-svc-xcrcq [850.339406ms]
Feb 18 00:58:41.097: INFO: Created: latency-svc-pmfvl
Feb 18 00:58:41.115: INFO: Got endpoints: latency-svc-pmfvl [850.92373ms]
Feb 18 00:58:41.147: INFO: Created: latency-svc-cxf7s
Feb 18 00:58:41.163: INFO: Got endpoints: latency-svc-cxf7s [856.721435ms]
Feb 18 00:58:41.214: INFO: Created: latency-svc-88nfk
Feb 18 00:58:41.223: INFO: Got endpoints: latency-svc-88nfk [831.734453ms]
Feb 18 00:58:41.249: INFO: Created: latency-svc-lgldv
Feb 18 00:58:41.266: INFO: Got endpoints: latency-svc-lgldv [821.040218ms]
Feb 18 00:58:41.288: INFO: Created: latency-svc-h8p49
Feb 18 00:58:41.308: INFO: Got endpoints: latency-svc-h8p49 [749.047127ms]
Feb 18 00:58:41.358: INFO: Created: latency-svc-pwbzb
Feb 18 00:58:41.379: INFO: Got endpoints: latency-svc-pwbzb [779.69967ms]
Feb 18 00:58:41.379: INFO: Created: latency-svc-cwmbz
Feb 18 00:58:41.391: INFO: Got endpoints: latency-svc-cwmbz [725.365817ms]
Feb 18 00:58:41.426: INFO: Created: latency-svc-gvx49
Feb 18 00:58:41.451: INFO: Got endpoints: latency-svc-gvx49 [718.889639ms]
Feb 18 00:58:41.494: INFO: Created: latency-svc-cb6qh
Feb 18 00:58:41.512: INFO: Got endpoints: latency-svc-cb6qh [710.967579ms]
Feb 18 00:58:41.548: INFO: Created: latency-svc-mg2jk
Feb 18 00:58:41.558: INFO: Got endpoints: latency-svc-mg2jk [724.156575ms]
Feb 18 00:58:41.572: INFO: Created: latency-svc-rv5ln
Feb 18 00:58:41.669: INFO: Got endpoints: latency-svc-rv5ln [799.853177ms]
Feb 18 00:58:41.672: INFO: Created: latency-svc-5gvt4
Feb 18 00:58:41.684: INFO: Got endpoints: latency-svc-5gvt4 [790.505186ms]
Feb 18 00:58:41.708: INFO: Created: latency-svc-8r5ph
Feb 18 00:58:41.726: INFO: Got endpoints: latency-svc-8r5ph [766.986244ms]
Feb 18 00:58:41.763: INFO: Created: latency-svc-rjgp5
Feb 18 00:58:41.823: INFO: Got endpoints: latency-svc-rjgp5 [845.707929ms]
Feb 18 00:58:41.842: INFO: Created: latency-svc-6j5v9
Feb 18 00:58:41.852: INFO: Got endpoints: latency-svc-6j5v9 [775.624949ms]
Feb 18 00:58:41.866: INFO: Created: latency-svc-9jp98
Feb 18 00:58:41.882: INFO: Got endpoints: latency-svc-9jp98 [767.122735ms]
Feb 18 00:58:41.912: INFO: Created: latency-svc-5h689
Feb 18 00:58:41.962: INFO: Got endpoints: latency-svc-5h689 [799.272309ms]
Feb 18 00:58:41.966: INFO: Created: latency-svc-9c6lr
Feb 18 00:58:41.985: INFO: Got endpoints: latency-svc-9c6lr [761.325222ms]
Feb 18 00:58:42.002: INFO: Created: latency-svc-x2w4q
Feb 18 00:58:42.015: INFO: Got endpoints: latency-svc-x2w4q [749.097569ms]
Feb 18 00:58:42.032: INFO: Created: latency-svc-tlwfp
Feb 18 00:58:42.045: INFO: Got endpoints: latency-svc-tlwfp [736.524522ms]
Feb 18 00:58:42.062: INFO: Created: latency-svc-qq48n
Feb 18 00:58:42.112: INFO: Got endpoints: latency-svc-qq48n [732.966433ms]
Feb 18 00:58:42.118: INFO: Created: latency-svc-lj9fm
Feb 18 00:58:42.135: INFO: Got endpoints: latency-svc-lj9fm [743.312107ms]
Feb 18 00:58:42.172: INFO: Created: latency-svc-6qlb2
Feb 18 00:58:42.189: INFO: Got endpoints: latency-svc-6qlb2 [737.417168ms]
Feb 18 00:58:42.244: INFO: Created: latency-svc-bj6cq
Feb 18 00:58:42.266: INFO: Got endpoints: latency-svc-bj6cq [753.990444ms]
Feb 18 00:58:42.266: INFO: Created: latency-svc-x7sl8
Feb 18 00:58:42.284: INFO: Got endpoints: latency-svc-x7sl8 [725.418716ms]
Feb 18 00:58:42.284: INFO: Latencies: [51.764997ms 80.66196ms 171.774083ms 201.571872ms 241.271191ms 307.683701ms 324.931185ms 360.762702ms 424.89949ms 451.106024ms 504.830362ms 580.285632ms 614.089064ms 647.837109ms 650.230447ms 653.091875ms 657.43651ms 658.863677ms 659.117867ms 659.164037ms 659.243793ms 659.509277ms 666.273ms 671.224994ms 671.440174ms 671.511101ms 676.409454ms 676.653924ms 676.967017ms 677.164019ms 677.171687ms 677.25299ms 677.738116ms 682.924054ms 683.514047ms 683.714335ms 684.059263ms 684.603109ms 685.529737ms 687.548959ms 694.049557ms 694.317973ms 696.033861ms 696.115988ms 698.78098ms 700.977815ms 701.096944ms 701.315681ms 701.345869ms 706.549887ms 706.731646ms 710.967579ms 712.177581ms 713.455713ms 718.694635ms 718.858446ms 718.889639ms 719.293488ms 719.753785ms 720.056068ms 721.309273ms 724.156575ms 724.503897ms 725.365817ms 725.418716ms 729.052692ms 729.997061ms 730.834958ms 730.868964ms 731.119292ms 731.141662ms 732.966433ms 736.524522ms 737.109021ms 737.4071ms 737.413361ms 737.417168ms 743.312107ms 743.317614ms 748.964531ms 749.047127ms 749.097569ms 749.120471ms 753.990444ms 755.057684ms 760.319113ms 761.295445ms 761.325222ms 763.98284ms 764.101136ms 765.835989ms 766.360426ms 766.53057ms 766.986244ms 767.122735ms 767.15158ms 767.761241ms 772.690768ms 773.027711ms 773.344811ms 773.77402ms 774.479455ms 775.624949ms 776.534771ms 777.429768ms 778.272133ms 778.706189ms 779.69967ms 780.913865ms 782.317552ms 784.939298ms 790.505186ms 799.126572ms 799.272309ms 799.853177ms 801.899524ms 807.504367ms 808.538234ms 811.411611ms 814.262333ms 815.188811ms 815.687105ms 820.132742ms 821.040218ms 821.526499ms 821.800294ms 825.690581ms 826.827174ms 826.914432ms 829.323935ms 830.022964ms 831.734453ms 832.122711ms 838.318113ms 838.606401ms 838.924275ms 839.123985ms 840.892683ms 844.44434ms 845.707929ms 846.014667ms 846.415086ms 848.371123ms 850.339406ms 850.623123ms 850.92373ms 852.510325ms 855.699553ms 856.721435ms 857.507303ms 857.768899ms 859.523604ms 860.393443ms 860.870111ms 862.92828ms 863.096223ms 863.635724ms 863.819117ms 864.11311ms 865.999657ms 866.863041ms 868.049785ms 868.3965ms 868.659633ms 868.926629ms 870.786862ms 870.855307ms 870.924943ms 874.958093ms 875.110851ms 877.17507ms 878.087941ms 880.914815ms 881.315636ms 883.374437ms 886.845374ms 887.054131ms 890.491005ms 891.304613ms 892.526619ms 892.600078ms 892.791984ms 892.946963ms 896.437758ms 898.507572ms 898.900813ms 898.921648ms 902.6997ms 906.827034ms 910.468514ms 911.155517ms 912.501441ms 913.811404ms 919.939773ms 922.335679ms 922.370816ms 927.307415ms 950.635872ms 952.395631ms 975.765629ms]
Feb 18 00:58:42.284: INFO: 50 %ile: 773.77402ms
Feb 18 00:58:42.284: INFO: 90 %ile: 892.600078ms
Feb 18 00:58:42.284: INFO: 99 %ile: 952.395631ms
Feb 18 00:58:42.284: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:58:42.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8296" for this suite.
Feb 18 00:59:06.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:59:06.400: INFO: namespace svc-latency-8296 deletion completed in 24.109057022s

• [SLOW TEST:38.818 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
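Each Created / Got endpoints pair above times one iteration of the same operation: create a Service whose selector matches the svc-latency-rc pods, then wait for the endpoints controller to publish a ready endpoint. A sketch of the per-iteration Service, with the selector and port as assumptions inferred from the RC name (the real objects are generated by the framework):

apiVersion: v1
kind: Service
metadata:
  name: latency-svc-example        # the run generated random suffixes, e.g. latency-svc-hpqlk
spec:
  selector:
    name: svc-latency-rc           # assumed pod label derived from the RC name
  ports:
  - port: 80                       # assumed port
    protocol: TCP

The percentiles printed above (50/90/99 %ile over 200 samples) are computed over exactly these create-to-endpoint intervals.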
S
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:59:06.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-389aef26-5d02-496a-bfa1-b9044d473b8d
STEP: Creating a pod to test consume secrets
Feb 18 00:59:06.594: INFO: Waiting up to 5m0s for pod "pod-secrets-473c4475-6f57-4610-a7f5-9457658c9b1a" in namespace "secrets-8343" to be "success or failure"
Feb 18 00:59:06.616: INFO: Pod "pod-secrets-473c4475-6f57-4610-a7f5-9457658c9b1a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.21295ms
Feb 18 00:59:08.621: INFO: Pod "pod-secrets-473c4475-6f57-4610-a7f5-9457658c9b1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026471319s
Feb 18 00:59:10.625: INFO: Pod "pod-secrets-473c4475-6f57-4610-a7f5-9457658c9b1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030882734s
STEP: Saw pod success
Feb 18 00:59:10.625: INFO: Pod "pod-secrets-473c4475-6f57-4610-a7f5-9457658c9b1a" satisfied condition "success or failure"
Feb 18 00:59:10.628: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-473c4475-6f57-4610-a7f5-9457658c9b1a container secret-volume-test: 
STEP: delete the pod
Feb 18 00:59:10.838: INFO: Waiting for pod pod-secrets-473c4475-6f57-4610-a7f5-9457658c9b1a to disappear
Feb 18 00:59:10.900: INFO: Pod pod-secrets-473c4475-6f57-4610-a7f5-9457658c9b1a no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:59:10.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8343" for this suite.
Feb 18 00:59:16.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:59:17.054: INFO: namespace secrets-8343 deletion completed in 6.149919693s

• [SLOW TEST:10.654 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
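The test above mounts a Secret into a volume while remapping the key to a new file path and setting an explicit per-item file mode, then verifies the mode on the mounted file. A minimal sketch of the shape being exercised (names, key, path, image, and the 0400 value are illustrative assumptions):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map-example
data:
  data-1: dmFsdWUtMQ==             # base64 for "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                 # assumed image
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1
        path: new-path-data-1      # the "mapping": key exposed under a different path
        mode: 0400                 # octal; the per-item "Item Mode" the test asserts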
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:59:17.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb 18 00:59:17.133: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:59:17.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9868" for this suite.
Feb 18 00:59:23.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:59:23.356: INFO: namespace kubectl-9868 deletion completed in 6.10503459s

• [SLOW TEST:6.302 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:59:23.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-268e594a-c279-401e-8be2-cc61a01a740b
STEP: Creating a pod to test consume secrets
Feb 18 00:59:23.666: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a1bdca3b-70ee-4699-bd1b-b51b1d4b0047" in namespace "projected-8107" to be "success or failure"
Feb 18 00:59:23.669: INFO: Pod "pod-projected-secrets-a1bdca3b-70ee-4699-bd1b-b51b1d4b0047": Phase="Pending", Reason="", readiness=false. Elapsed: 3.249828ms
Feb 18 00:59:25.729: INFO: Pod "pod-projected-secrets-a1bdca3b-70ee-4699-bd1b-b51b1d4b0047": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063805045s
Feb 18 00:59:27.734: INFO: Pod "pod-projected-secrets-a1bdca3b-70ee-4699-bd1b-b51b1d4b0047": Phase="Running", Reason="", readiness=true. Elapsed: 4.068280401s
Feb 18 00:59:29.738: INFO: Pod "pod-projected-secrets-a1bdca3b-70ee-4699-bd1b-b51b1d4b0047": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072539342s
STEP: Saw pod success
Feb 18 00:59:29.738: INFO: Pod "pod-projected-secrets-a1bdca3b-70ee-4699-bd1b-b51b1d4b0047" satisfied condition "success or failure"
Feb 18 00:59:29.742: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-a1bdca3b-70ee-4699-bd1b-b51b1d4b0047 container projected-secret-volume-test: 
STEP: delete the pod
Feb 18 00:59:29.782: INFO: Waiting for pod pod-projected-secrets-a1bdca3b-70ee-4699-bd1b-b51b1d4b0047 to disappear
Feb 18 00:59:29.807: INFO: Pod pod-projected-secrets-a1bdca3b-70ee-4699-bd1b-b51b1d4b0047 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:59:29.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8107" for this suite.
Feb 18 00:59:35.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 00:59:35.923: INFO: namespace projected-8107 deletion completed in 6.112192704s

• [SLOW TEST:12.567 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
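This is the projected-volume variant of the Secret mapping test earlier in the run: the same key remapping and per-item mode, but delivered through a projected volume's sources list instead of a plain secret volume. Sketch (all names and the image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                 # assumed image
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-example
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400             # octal per-item mode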
SSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 00:59:35.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-cda0c4d2-e338-4618-9df9-0966269f76ba
STEP: Creating configMap with name cm-test-opt-upd-2f1bc25c-7d7b-404b-a253-9bac7b584290
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-cda0c4d2-e338-4618-9df9-0966269f76ba
STEP: Updating configmap cm-test-opt-upd-2f1bc25c-7d7b-404b-a253-9bac7b584290
STEP: Creating configMap with name cm-test-opt-create-f302704d-5c24-4d85-8a95-11d7a07bef6b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 00:59:44.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1787" for this suite.
Feb 18 01:00:06.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:00:06.617: INFO: namespace projected-1787 deletion completed in 22.172002499s

• [SLOW TEST:30.694 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
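The STEP lines above show the choreography: one projected volume references three ConfigMaps, all marked optional; the test deletes one (opt-del), updates one (opt-upd), creates one that did not exist at pod start (opt-create), and waits for the kubelet to reflect all three changes in the mounted volume. A sketch of the volume shape, where optional: true is the property under test (names and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox                 # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-configmap-volumes
      mountPath: /etc/projected-configmap-volumes
  volumes:
  - name: projected-configmap-volumes
    projected:
      sources:
      - configMap:
          name: cm-test-opt-del-example
          optional: true           # pod keeps running after this one is deleted
      - configMap:
          name: cm-test-opt-upd-example
          optional: true
      - configMap:
          name: cm-test-opt-create-example
          optional: true           # may not exist yet when the pod starts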
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:00:06.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 01:00:06.709: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1385d63-52e9-41a3-b44b-49fa81a349d2" in namespace "downward-api-2266" to be "success or failure"
Feb 18 01:00:06.713: INFO: Pod "downwardapi-volume-f1385d63-52e9-41a3-b44b-49fa81a349d2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.500933ms
Feb 18 01:00:08.716: INFO: Pod "downwardapi-volume-f1385d63-52e9-41a3-b44b-49fa81a349d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007023542s
Feb 18 01:00:10.720: INFO: Pod "downwardapi-volume-f1385d63-52e9-41a3-b44b-49fa81a349d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01068271s
STEP: Saw pod success
Feb 18 01:00:10.720: INFO: Pod "downwardapi-volume-f1385d63-52e9-41a3-b44b-49fa81a349d2" satisfied condition "success or failure"
Feb 18 01:00:10.723: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-f1385d63-52e9-41a3-b44b-49fa81a349d2 container client-container: 
STEP: delete the pod
Feb 18 01:00:10.834: INFO: Waiting for pod downwardapi-volume-f1385d63-52e9-41a3-b44b-49fa81a349d2 to disappear
Feb 18 01:00:10.863: INFO: Pod downwardapi-volume-f1385d63-52e9-41a3-b44b-49fa81a349d2 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:00:10.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2266" for this suite.
Feb 18 01:00:16.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:00:17.086: INFO: namespace downward-api-2266 deletion completed in 6.2187656s

• [SLOW TEST:10.469 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
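Here the volume source is the downward API rather than a ConfigMap or Secret, and the assertion is on defaultMode, the mode applied to every projected file that lacks a per-item override. Sketch (names, image, and the 0400 value are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                 # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400            # octal; applied to every file below
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name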
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:00:17.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-bb1d9b85-5c2e-4e2e-8f40-d7b03e752145
STEP: Creating a pod to test consume configMaps
Feb 18 01:00:17.260: INFO: Waiting up to 5m0s for pod "pod-configmaps-28a5b514-3be4-43d4-82e7-9e4f0c030dff" in namespace "configmap-2447" to be "success or failure"
Feb 18 01:00:17.264: INFO: Pod "pod-configmaps-28a5b514-3be4-43d4-82e7-9e4f0c030dff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.986906ms
Feb 18 01:00:19.281: INFO: Pod "pod-configmaps-28a5b514-3be4-43d4-82e7-9e4f0c030dff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020448122s
Feb 18 01:00:21.285: INFO: Pod "pod-configmaps-28a5b514-3be4-43d4-82e7-9e4f0c030dff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024517314s
STEP: Saw pod success
Feb 18 01:00:21.285: INFO: Pod "pod-configmaps-28a5b514-3be4-43d4-82e7-9e4f0c030dff" satisfied condition "success or failure"
Feb 18 01:00:21.288: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-28a5b514-3be4-43d4-82e7-9e4f0c030dff container configmap-volume-test: 
STEP: delete the pod
Feb 18 01:00:21.331: INFO: Waiting for pod pod-configmaps-28a5b514-3be4-43d4-82e7-9e4f0c030dff to disappear
Feb 18 01:00:21.335: INFO: Pod pod-configmaps-28a5b514-3be4-43d4-82e7-9e4f0c030dff no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:00:21.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2447" for this suite.
Feb 18 01:00:27.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:00:27.459: INFO: namespace configmap-2447 deletion completed in 6.1205685s

• [SLOW TEST:10.372 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
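The point of this test is that a single ConfigMap can back several volumes in the same pod, with the kubelet materializing it independently at each mount point. Sketch (names and image are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                 # assumed image
    command: ["sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"]
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2
  volumes:
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume-example
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume-example   # same ConfigMap, second mount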
S
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:00:27.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb 18 01:00:27.586: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-586,SelfLink:/api/v1/namespaces/watch-586/configmaps/e2e-watch-test-resource-version,UID:43f70bcf-4f91-4d8d-b893-03e66c142644,ResourceVersion:6962554,Generation:0,CreationTimestamp:2021-02-18 01:00:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb 18 01:00:27.586: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-586,SelfLink:/api/v1/namespaces/watch-586/configmaps/e2e-watch-test-resource-version,UID:43f70bcf-4f91-4d8d-b893-03e66c142644,ResourceVersion:6962555,Generation:0,CreationTimestamp:2021-02-18 01:00:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:00:27.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-586" for this suite.
Feb 18 01:00:33.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:00:33.723: INFO: namespace watch-586 deletion completed in 6.132826172s

• [SLOW TEST:6.264 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
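The two Got : MODIFIED / DELETED notifications above are the whole point of the test: a watch opened at the resourceVersion returned by the first update must replay only events after that version, so the first modification is never delivered. The watched object, reconstructed from the log output above (its data reads mutation: 2, the state after the second modification):

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-resource-version
  namespace: watch-586
  labels:
    watch-this-configmap: from-resource-version
data:
  mutation: "2"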
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:00:33.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 01:00:33.750: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 18 01:00:33.801: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 18 01:00:38.805: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 18 01:00:38.805: INFO: Creating deployment "test-rolling-update-deployment"
Feb 18 01:00:38.811: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 18 01:00:38.825: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 18 01:00:40.832: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb 18 01:00:40.834: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206838, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206838, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206838, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63749206838, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 18 01:00:42.838: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 18 01:00:42.847: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-8824,SelfLink:/apis/apps/v1/namespaces/deployment-8824/deployments/test-rolling-update-deployment,UID:689675af-6ca1-4854-b00f-97329d53768c,ResourceVersion:6962634,Generation:1,CreationTimestamp:2021-02-18 01:00:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2021-02-18 01:00:38 +0000 UTC 2021-02-18 01:00:38 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2021-02-18 01:00:42 +0000 UTC 2021-02-18 01:00:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb 18 01:00:42.850: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-8824,SelfLink:/apis/apps/v1/namespaces/deployment-8824/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:17047814-66e5-4088-a73e-7464ab64bab7,ResourceVersion:6962622,Generation:1,CreationTimestamp:2021-02-18 01:00:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 689675af-6ca1-4854-b00f-97329d53768c 0xc000886f17 0xc000886f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb 18 01:00:42.850: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 18 01:00:42.850: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-8824,SelfLink:/apis/apps/v1/namespaces/deployment-8824/replicasets/test-rolling-update-controller,UID:5709eb43-036a-4da2-8bb8-78da06091e7f,ResourceVersion:6962633,Generation:2,CreationTimestamp:2021-02-18 01:00:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 689675af-6ca1-4854-b00f-97329d53768c 0xc000886e2f 0xc000886e40}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 18 01:00:42.853: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-7zw6x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-7zw6x,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-8824,SelfLink:/api/v1/namespaces/deployment-8824/pods/test-rolling-update-deployment-79f6b9d75c-7zw6x,UID:8bf65a77-79e3-4fef-ac0c-40a82c19bf32,ResourceVersion:6962621,Generation:0,CreationTimestamp:2021-02-18 01:00:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 17047814-66e5-4088-a73e-7464ab64bab7 0xc0030a5707 0xc0030a5708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cm6x4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cm6x4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-cm6x4 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030a5780} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030a57a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:00:38 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:00:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:00:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:00:38 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.158,StartTime:2021-02-18 01:00:38 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2021-02-18 01:00:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://33bce90755783b285ac611cb05a8f782a668594fe9d4b9251072dcbafb8492a1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:00:42.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8824" for this suite.
Feb 18 01:00:49.042: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:00:49.127: INFO: namespace deployment-8824 deletion completed in 6.270232912s

• [SLOW TEST:15.404 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
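For context, the test above first runs the pod under a bare ReplicaSet ("test-rolling-update-controller") and then creates a Deployment whose selector matches it, so the Deployment adopts that ReplicaSet as its single old one. A minimal sketch of the Deployment side, using the names and image visible in the dumps and spelling out the default 25%/25% RollingUpdate strategy; everything not in the log is an assumption:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: "25%"
      maxSurge: "25%"
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0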
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:00:49.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 18 01:01:00.211: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 18 01:01:00.245: INFO: Pod pod-with-prestop-http-hook still exists
Feb 18 01:01:02.245: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 18 01:01:02.250: INFO: Pod pod-with-prestop-http-hook still exists
Feb 18 01:01:04.245: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 18 01:01:04.249: INFO: Pod pod-with-prestop-http-hook still exists
Feb 18 01:01:06.245: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 18 01:01:06.250: INFO: Pod pod-with-prestop-http-hook still exists
Feb 18 01:01:08.245: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 18 01:01:08.250: INFO: Pod pod-with-prestop-http-hook still exists
Feb 18 01:01:10.245: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 18 01:01:10.254: INFO: Pod pod-with-prestop-http-hook still exists
Feb 18 01:01:12.245: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 18 01:01:12.250: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:01:12.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7261" for this suite.
Feb 18 01:01:34.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:01:34.549: INFO: namespace container-lifecycle-hook-7261 deletion completed in 22.286551832s

• [SLOW TEST:45.422 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
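The prestop test wires an HTTP preStop hook into the pod that gets deleted: the kubelet invokes the hook before sending SIGTERM, and the suite then checks a separate handler pod to confirm the request arrived. A sketch of the hook wiring, keeping the pod name from the log; the container name, image, path, port, and handler address are illustrative assumptions, and the repeated "still exists" lines above are just the poll loop waiting for deletion to complete:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: main                                   # hypothetical container name
    image: docker.io/library/nginx:1.14-alpine   # illustrative image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop                # assumed handler endpoint
          port: 8080
          host: 10.244.2.1                       # hypothetical IP of the handler pod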
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:01:34.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb 18 01:01:34.717: INFO: Waiting up to 5m0s for pod "client-containers-ea991483-2217-47a8-8ac6-40689e24b6ba" in namespace "containers-9789" to be "success or failure"
Feb 18 01:01:34.758: INFO: Pod "client-containers-ea991483-2217-47a8-8ac6-40689e24b6ba": Phase="Pending", Reason="", readiness=false. Elapsed: 40.656117ms
Feb 18 01:01:36.863: INFO: Pod "client-containers-ea991483-2217-47a8-8ac6-40689e24b6ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145340108s
Feb 18 01:01:38.868: INFO: Pod "client-containers-ea991483-2217-47a8-8ac6-40689e24b6ba": Phase="Running", Reason="", readiness=true. Elapsed: 4.150106542s
Feb 18 01:01:40.872: INFO: Pod "client-containers-ea991483-2217-47a8-8ac6-40689e24b6ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.154709361s
STEP: Saw pod success
Feb 18 01:01:40.872: INFO: Pod "client-containers-ea991483-2217-47a8-8ac6-40689e24b6ba" satisfied condition "success or failure"
Feb 18 01:01:40.875: INFO: Trying to get logs from node iruya-worker pod client-containers-ea991483-2217-47a8-8ac6-40689e24b6ba container test-container: 
STEP: delete the pod
Feb 18 01:01:40.903: INFO: Waiting for pod client-containers-ea991483-2217-47a8-8ac6-40689e24b6ba to disappear
Feb 18 01:01:40.906: INFO: Pod client-containers-ea991483-2217-47a8-8ac6-40689e24b6ba no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:01:40.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9789" for this suite.
Feb 18 01:01:46.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:01:47.122: INFO: namespace containers-9789 deletion completed in 6.213303604s

• [SLOW TEST:12.572 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
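For a Docker image, spec.containers[].command replaces the image's ENTRYPOINT (and args would replace CMD); the test verifies the override by reading the container's logs. A sketch with the container name from the log and an assumed pod name, image, and command:

apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-example         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["/bin/sh", "-c", "echo entrypoint overridden"]   # replaces ENTRYPOINT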
SS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:01:47.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c1bc84bb-d246-4571-8f3a-c18b9fa9061f
STEP: Creating a pod to test consume secrets
Feb 18 01:01:47.323: INFO: Waiting up to 5m0s for pod "pod-secrets-0f533149-e58c-427e-ad11-1d35794e6b70" in namespace "secrets-4110" to be "success or failure"
Feb 18 01:01:47.326: INFO: Pod "pod-secrets-0f533149-e58c-427e-ad11-1d35794e6b70": Phase="Pending", Reason="", readiness=false. Elapsed: 3.493922ms
Feb 18 01:01:49.330: INFO: Pod "pod-secrets-0f533149-e58c-427e-ad11-1d35794e6b70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007114423s
Feb 18 01:01:51.334: INFO: Pod "pod-secrets-0f533149-e58c-427e-ad11-1d35794e6b70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011010657s
STEP: Saw pod success
Feb 18 01:01:51.334: INFO: Pod "pod-secrets-0f533149-e58c-427e-ad11-1d35794e6b70" satisfied condition "success or failure"
Feb 18 01:01:51.337: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-0f533149-e58c-427e-ad11-1d35794e6b70 container secret-volume-test: 
STEP: delete the pod
Feb 18 01:01:51.358: INFO: Waiting for pod pod-secrets-0f533149-e58c-427e-ad11-1d35794e6b70 to disappear
Feb 18 01:01:51.368: INFO: Pod pod-secrets-0f533149-e58c-427e-ad11-1d35794e6b70 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:01:51.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4110" for this suite.
Feb 18 01:01:57.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:01:57.466: INFO: namespace secrets-4110 deletion completed in 6.095034545s

• [SLOW TEST:10.344 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
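Consuming one Secret in multiple volumes just means declaring two volumes that reference the same secretName and mounting both; the kubelet projects the same keys at each path. A sketch using the secret name from the log; the pod name, image, key, and mount paths are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example                 # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-c1bc84bb-d246-4571-8f3a-c18b9fa9061f
  - name: secret-volume-2
    secret:
      secretName: secret-test-c1bc84bb-d246-4571-8f3a-c18b9fa9061f
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["cat", "/etc/secret-volume-1/data-1"]   # "data-1" is an assumed key
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true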
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:01:57.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-618a42a3-1a28-4876-bf94-ecb8acb7b8fc in namespace container-probe-2052
Feb 18 01:02:01.767: INFO: Started pod liveness-618a42a3-1a28-4876-bf94-ecb8acb7b8fc in namespace container-probe-2052
STEP: checking the pod's current state and verifying that restartCount is present
Feb 18 01:02:01.770: INFO: Initial restart count of pod liveness-618a42a3-1a28-4876-bf94-ecb8acb7b8fc is 0
Feb 18 01:02:17.894: INFO: Restart count of pod container-probe-2052/liveness-618a42a3-1a28-4876-bf94-ecb8acb7b8fc is now 1 (16.124390078s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:02:17.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2052" for this suite.
Feb 18 01:02:23.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:02:24.018: INFO: namespace container-probe-2052 deletion completed in 6.102256936s

• [SLOW TEST:26.551 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
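The probe under test targets an HTTP /healthz endpoint; the suite uses a purpose-built server that goes unhealthy shortly after startup, which is why exactly one restart shows up within the 16s window above. The probe wiring looks like this sketch, where the pod name, image/tag, and timings are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-example   # hypothetical name
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.1   # assumed image and tag
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 5
      failureThreshold: 1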
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:02:24.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-27be821c-7337-41c5-ae4a-c0f12441e172
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:02:24.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8247" for this suite.
Feb 18 01:02:30.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:02:30.234: INFO: namespace secrets-8247 deletion completed in 6.113657837s

• [SLOW TEST:6.216 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
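No pod is involved in this one: the API server rejects a Secret whose data map contains an empty key at validation time, so the create call itself fails and nothing is stored. A sketch of the kind of manifest that triggers it; the name is abbreviated from the generated one above and the value is an arbitrary base64 placeholder:

apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-test   # abbreviated from the generated name in the log
data:
  "": dmFsdWUtMQ==             # empty key: rejected by validation, object is never created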
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:02:30.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 18 01:02:30.324: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb 18 01:02:39.374: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:02:39.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5484" for this suite.
Feb 18 01:02:45.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:02:45.500: INFO: namespace pods-5484 deletion completed in 6.118324254s

• [SLOW TEST:15.265 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
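The submit/remove test registers a watch on the namespace before creating anything, then checks that the creation, termination, and deletion events all arrive in order around a graceful delete. The pod itself is unremarkable; a sketch with an assumed name, label, and image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-example   # hypothetical name
  labels:
    test: watch-me                  # assumed label used as the watch selector
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine   # illustrative image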
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:02:45.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-908e5db7-19c5-4fcd-bdfa-43cbad668f79 in namespace container-probe-9822
Feb 18 01:02:49.571: INFO: Started pod liveness-908e5db7-19c5-4fcd-bdfa-43cbad668f79 in namespace container-probe-9822
STEP: checking the pod's current state and verifying that restartCount is present
Feb 18 01:02:49.574: INFO: Initial restart count of pod liveness-908e5db7-19c5-4fcd-bdfa-43cbad668f79 is 0
Feb 18 01:03:01.602: INFO: Restart count of pod container-probe-9822/liveness-908e5db7-19c5-4fcd-bdfa-43cbad668f79 is now 1 (12.027278929s elapsed)
Feb 18 01:03:21.766: INFO: Restart count of pod container-probe-9822/liveness-908e5db7-19c5-4fcd-bdfa-43cbad668f79 is now 2 (32.19180097s elapsed)
Feb 18 01:03:41.814: INFO: Restart count of pod container-probe-9822/liveness-908e5db7-19c5-4fcd-bdfa-43cbad668f79 is now 3 (52.240045659s elapsed)
Feb 18 01:04:02.101: INFO: Restart count of pod container-probe-9822/liveness-908e5db7-19c5-4fcd-bdfa-43cbad668f79 is now 4 (1m12.527092773s elapsed)
Feb 18 01:05:12.630: INFO: Restart count of pod container-probe-9822/liveness-908e5db7-19c5-4fcd-bdfa-43cbad668f79 is now 5 (2m23.055665538s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:05:12.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9822" for this suite.
Feb 18 01:05:18.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:05:18.776: INFO: namespace container-probe-9822 deletion completed in 6.102088428s

• [SLOW TEST:153.276 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
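The restart count can only grow because the kubelet restarts the same container in place under restartPolicy: Always, and the widening gaps in the log (about 20s between the early restarts, stretching past a minute by restart 5) are the kubelet's exponential back-off. Any probe that always fails reproduces the pattern; a hypothetical exec-probe variant:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-example   # hypothetical name
spec:
  restartPolicy: Always
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["/bin/sh", "-c", "sleep 3600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/does-not-exist"]   # always fails, forcing repeated restarts
      initialDelaySeconds: 5
      periodSeconds: 5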
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:05:18.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 01:05:18.842: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 18 01:05:20.879: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:05:22.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5456" for this suite.
Feb 18 01:05:28.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:05:28.648: INFO: namespace replication-controller-5456 deletion completed in 6.338372581s

• [SLOW TEST:9.871 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
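The quota test creates a ResourceQuota capping the namespace at two pods plus an rc that asks for more; the controller's failed pod creates are surfaced on the rc as a ReplicaFailure-style condition, which clears once the rc is scaled down within quota. A sketch of both objects using the "condition-test" name from the log; the replica count, labels, and image are assumptions:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                # more than the quota allows; scaling back to 2 clears the condition
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # illustrative image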
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:05:28.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-7b7f361d-996b-4724-a87b-1a50de51e37c
STEP: Creating a pod to test consume secrets
Feb 18 01:05:28.883: INFO: Waiting up to 5m0s for pod "pod-secrets-2b1f3358-3aee-45a3-84f4-d6c6c914b372" in namespace "secrets-3934" to be "success or failure"
Feb 18 01:05:28.893: INFO: Pod "pod-secrets-2b1f3358-3aee-45a3-84f4-d6c6c914b372": Phase="Pending", Reason="", readiness=false. Elapsed: 9.614719ms
Feb 18 01:05:31.021: INFO: Pod "pod-secrets-2b1f3358-3aee-45a3-84f4-d6c6c914b372": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137428488s
Feb 18 01:05:33.026: INFO: Pod "pod-secrets-2b1f3358-3aee-45a3-84f4-d6c6c914b372": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.142203414s
STEP: Saw pod success
Feb 18 01:05:33.026: INFO: Pod "pod-secrets-2b1f3358-3aee-45a3-84f4-d6c6c914b372" satisfied condition "success or failure"
Feb 18 01:05:33.029: INFO: Trying to get logs from node iruya-worker pod pod-secrets-2b1f3358-3aee-45a3-84f4-d6c6c914b372 container secret-volume-test: 
STEP: delete the pod
Feb 18 01:05:33.105: INFO: Waiting for pod pod-secrets-2b1f3358-3aee-45a3-84f4-d6c6c914b372 to disappear
Feb 18 01:05:33.107: INFO: Pod pod-secrets-2b1f3358-3aee-45a3-84f4-d6c6c914b372 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:05:33.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3934" for this suite.
Feb 18 01:05:39.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:05:39.205: INFO: namespace secrets-3934 deletion completed in 6.094787411s

• [SLOW TEST:10.556 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
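The secret consumed here is an ordinary Opaque Secret; the pod mounts it as a volume and the test container prints the file contents, which the framework compares against the expected plaintext. A sketch of the Secret itself, with the name from the log and an assumed key/value:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-7b7f361d-996b-4724-a87b-1a50de51e37c
type: Opaque
data:
  data-1: dmFsdWUtMQ==   # assumed key; the value decodes to "value-1"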
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:05:39.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 18 01:05:39.306: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87733995-2d24-492c-8ecd-03e759981de3" in namespace "projected-2152" to be "success or failure"
Feb 18 01:05:39.338: INFO: Pod "downwardapi-volume-87733995-2d24-492c-8ecd-03e759981de3": Phase="Pending", Reason="", readiness=false. Elapsed: 32.460981ms
Feb 18 01:05:41.419: INFO: Pod "downwardapi-volume-87733995-2d24-492c-8ecd-03e759981de3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113104228s
Feb 18 01:05:43.423: INFO: Pod "downwardapi-volume-87733995-2d24-492c-8ecd-03e759981de3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117555971s
STEP: Saw pod success
Feb 18 01:05:43.423: INFO: Pod "downwardapi-volume-87733995-2d24-492c-8ecd-03e759981de3" satisfied condition "success or failure"
Feb 18 01:05:43.426: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-87733995-2d24-492c-8ecd-03e759981de3 container client-container: 
STEP: delete the pod
Feb 18 01:05:43.507: INFO: Waiting for pod downwardapi-volume-87733995-2d24-492c-8ecd-03e759981de3 to disappear
Feb 18 01:05:43.524: INFO: Pod downwardapi-volume-87733995-2d24-492c-8ecd-03e759981de3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:05:43.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2152" for this suite.
Feb 18 01:05:49.540: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:05:49.619: INFO: namespace projected-2152 deletion completed in 6.090873046s

• [SLOW TEST:10.413 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
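The "mode on item file" assertion concerns the per-item mode field of a projected downwardAPI volume: the test sets it and has the container stat the file to confirm the permissions. A sketch keeping the container name from the log; the pod name, image, mode value, and paths are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["/bin/sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400   # assumed value of the per-item mode being asserted on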
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:05:49.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 18 01:05:54.265: INFO: Successfully updated pod "labelsupdatee4d34996-3476-4593-890b-69db5d48c53f"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:05:58.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1249" for this suite.
Feb 18 01:06:20.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:06:20.738: INFO: namespace projected-1249 deletion completed in 22.167137496s

• [SLOW TEST:31.119 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
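This variant projects metadata.labels into a file. Because the kubelet refreshes downwardAPI volume contents, relabeling the running pod (the "Successfully updated pod" step above) changes the mounted file without a restart, and that updated content is what the test reads back. A sketch with assumed names and image:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-example   # hypothetical name
  labels:
    key: value1                # assumed label that the test later modifies
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # illustrative image
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels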
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:06:20.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 01:06:20.813: INFO: Creating deployment "nginx-deployment"
Feb 18 01:06:20.817: INFO: Waiting for observed generation 1
Feb 18 01:06:22.838: INFO: Waiting for all required pods to come up
Feb 18 01:06:22.843: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb 18 01:06:32.852: INFO: Waiting for deployment "nginx-deployment" to complete
Feb 18 01:06:32.857: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb 18 01:06:32.887: INFO: Updating deployment nginx-deployment
Feb 18 01:06:32.887: INFO: Waiting for observed generation 2
Feb 18 01:06:34.904: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb 18 01:06:34.973: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb 18 01:06:34.987: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 18 01:06:35.023: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb 18 01:06:35.023: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb 18 01:06:35.025: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb 18 01:06:35.029: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb 18 01:06:35.029: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb 18 01:06:35.035: INFO: Updating deployment nginx-deployment
Feb 18 01:06:35.035: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb 18 01:06:35.294: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb 18 01:06:35.392: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
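The arithmetic behind those two numbers: with maxSurge: 3 the rollout may run up to 30 + 3 = 33 pods in total, and at the moment of scaling the two ReplicaSets hold 8 and 5 replicas (13 total). The extra 33 - 13 = 20 replicas are split roughly in proportion to current size, 20 x 8/13 = 12 for the first rollout's set and the remaining 8 for the second, giving 8 + 12 = 20 and 5 + 8 = 13, exactly as verified above. The strategy fields driving this appear in the dump below; a sketch of just that fragment with the worked numbers as comments:

spec:
  replicas: 30
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3        # total may burst to 30 + 3 = 33 during the rollout
      maxUnavailable: 2  # at least 28 of the 30 must remain available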
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb 18 01:06:38.001: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-3138,SelfLink:/apis/apps/v1/namespaces/deployment-3138/deployments/nginx-deployment,UID:55449c92-6f29-43a8-83ac-0340202de1f4,ResourceVersion:6963943,Generation:3,CreationTimestamp:2021-02-18 01:06:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2021-02-18 01:06:35 +0000 UTC 2021-02-18 01:06:35 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2021-02-18 01:06:35 +0000 UTC 2021-02-18 01:06:20 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb 18 01:06:38.477: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-3138,SelfLink:/apis/apps/v1/namespaces/deployment-3138/replicasets/nginx-deployment-55fb7cb77f,UID:f4c311f4-5082-4ec6-a9e0-7d08989ff303,ResourceVersion:6963929,Generation:3,CreationTimestamp:2021-02-18 01:06:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 55449c92-6f29-43a8-83ac-0340202de1f4 0xc00326b5d7 0xc00326b5d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb 18 01:06:38.477: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb 18 01:06:38.477: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-3138,SelfLink:/apis/apps/v1/namespaces/deployment-3138/replicasets/nginx-deployment-7b8c6f4498,UID:8b47ae5f-04b1-486f-a2b9-31eb688a36f7,ResourceVersion:6963938,Generation:3,CreationTimestamp:2021-02-18 01:06:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 55449c92-6f29-43a8-83ac-0340202de1f4 0xc00326b6a7 0xc00326b6a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb 18 01:06:38.987: INFO: Pod "nginx-deployment-55fb7cb77f-5nnnd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5nnnd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-5nnnd,UID:80701b28-adcd-436e-a36c-135fa10c257e,ResourceVersion:6963944,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc000228077 0xc000228078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0002280f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000228110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.987: INFO: Pod "nginx-deployment-55fb7cb77f-9hpwm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9hpwm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-9hpwm,UID:bb36df76-c169-431a-8a0d-5050b3658fdf,ResourceVersion:6963960,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc000228270 0xc000228271}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000228390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0002283b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.988: INFO: Pod "nginx-deployment-55fb7cb77f-cx79g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cx79g,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-cx79g,UID:08ebf1d4-af76-4dfe-ba58-9d6ede5412ef,ResourceVersion:6963965,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc0002286f0 0xc0002286f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000228820} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000228840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.988: INFO: Pod "nginx-deployment-55fb7cb77f-ffh9r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ffh9r,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-ffh9r,UID:287e1bb7-ca76-4352-b1f4-9a059908d129,ResourceVersion:6963972,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc000228910 0xc000228911}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000228a40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000228a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.988: INFO: Pod "nginx-deployment-55fb7cb77f-ggcgj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ggcgj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-ggcgj,UID:0f42ccbb-747a-4150-ba99-0d3f9da9ece6,ResourceVersion:6964001,Generation:0,CreationTimestamp:2021-02-18 01:06:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc000228b80 0xc000228b81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000228c20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000228c40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:32 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.36,StartTime:2021-02-18 01:06:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
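Note: nginx:404 resolves to docker.io/library/nginx:404, a tag that does not exist, so the pull fails permanently (ErrImagePull, then ImagePullBackOff on retry) and pods of the new ReplicaSet can never become Ready. A minimal sketch of the same sequence with plain kubectl, assuming an existing deployment named nginx-deployment with a container named nginx, as in the dumps above:

kubectl set image deployment/nginx-deployment nginx=nginx:404   # roll out an unpullable image
kubectl scale deployment/nginx-deployment --replicas=30         # resize mid-rollout; replicas split proportionally across both ReplicaSets
kubectl get rs                                                  # old RS keeps its available pods; new RS shows 0 ready
kubectl rollout status deployment/nginx-deployment              # blocks until timeout, since the image cannot be pulled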
Feb 18 01:06:38.989: INFO: Pod "nginx-deployment-55fb7cb77f-ggmst" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ggmst,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-ggmst,UID:04f76b5b-ecbf-4d61-b550-573b1bd88e09,ResourceVersion:6963971,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc000228d50 0xc000228d51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000228e00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000228e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.989: INFO: Pod "nginx-deployment-55fb7cb77f-gv9zl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gv9zl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-gv9zl,UID:529db1e0-a2d1-4f59-a100-0da20306f217,ResourceVersion:6963848,Generation:0,CreationTimestamp:2021-02-18 01:06:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc000228ef0 0xc000228ef1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000228f80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000228fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:32 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-02-18 01:06:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.989: INFO: Pod "nginx-deployment-55fb7cb77f-j44mq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j44mq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-j44mq,UID:571c2197-d70c-48e4-aa52-7e37a00f3733,ResourceVersion:6963940,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc000229080 0xc000229081}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000229100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000229120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.989: INFO: Pod "nginx-deployment-55fb7cb77f-k6dlv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k6dlv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-k6dlv,UID:ceeae2c2-fce8-4c95-bb0a-d3ec0a5b7149,ResourceVersion:6963863,Generation:0,CreationTimestamp:2021-02-18 01:06:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc0002291f0 0xc0002291f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000229270} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000229290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.989: INFO: Pod "nginx-deployment-55fb7cb77f-mzm58" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mzm58,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-mzm58,UID:6f41e5c8-c548-4a5d-99ff-1042797964b3,ResourceVersion:6963981,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc000229360 0xc000229361}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0002293e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000229400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.989: INFO: Pod "nginx-deployment-55fb7cb77f-nhpfr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nhpfr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-nhpfr,UID:38bf8b63-ed5e-4cb0-8013-4da371b3f8cc,ResourceVersion:6963858,Generation:0,CreationTimestamp:2021-02-18 01:06:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc0002294d0 0xc0002294d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000229550} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000229570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:33 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.990: INFO: Pod "nginx-deployment-55fb7cb77f-nv6mn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nv6mn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-nv6mn,UID:67ec8346-2b1e-4f74-9dbb-d6c565be258e,ResourceVersion:6964002,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc000229640 0xc000229641}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0002296c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0002296e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.990: INFO: Pod "nginx-deployment-55fb7cb77f-vkjmh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vkjmh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-55fb7cb77f-vkjmh,UID:e3f4b4b1-fe1d-4db2-88c1-d75afd694a79,ResourceVersion:6964005,Generation:0,CreationTimestamp:2021-02-18 01:06:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f f4c311f4-5082-4ec6-a9e0-7d08989ff303 0xc0002297b0 0xc0002297b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000229830} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000229850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:32 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:32 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.170,StartTime:2021-02-18 01:06:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.990: INFO: Pod "nginx-deployment-7b8c6f4498-48x5m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-48x5m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-48x5m,UID:fc2ebc41-89ca-401e-94ed-b95b12772002,ResourceVersion:6963931,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc000229940 0xc000229941}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0002299b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0002299e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.990: INFO: Pod "nginx-deployment-7b8c6f4498-4ms7h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4ms7h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-4ms7h,UID:3afea198-2f0a-46e8-afc3-11fd750c62b8,ResourceVersion:6963991,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc000229ab7 0xc000229ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000229b30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000229b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.990: INFO: Pod "nginx-deployment-7b8c6f4498-5bs84" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5bs84,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-5bs84,UID:4b19c1da-f17f-4e29-8c76-b30ec6572bad,ResourceVersion:6963784,Generation:0,CreationTimestamp:2021-02-18 01:06:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc000229c17 0xc000229c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000229cb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000229cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.33,StartTime:2021-02-18 01:06:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-02-18 01:06:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3f8918213e0e78992a4aa2ae7fe97baa18362362e153627cef79c148924eb05c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
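Note: availability in these dumps comes entirely from the old ReplicaSet: pods like nginx-deployment-7b8c6f4498-5bs84 run docker.io/library/nginx:1.14-alpine (resolved to the sha256:485b610f... digest) and report Ready True, while every nginx:404 pod stays Pending. The pod-template-hash label distinguishes the two sets; a quick way to see the split (an illustrative invocation, not part of the harness output):

kubectl get pods -n deployment-3138 -o custom-columns=NAME:.metadata.name,HASH:.metadata.labels.pod-template-hash,PHASE:.status.phase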
Feb 18 01:06:38.990: INFO: Pod "nginx-deployment-7b8c6f4498-64jvl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-64jvl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-64jvl,UID:2bd8be70-1146-467a-ae5f-742b50490757,ResourceVersion:6963952,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc000229dd7 0xc000229dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000229e50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000229e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.990: INFO: Pod "nginx-deployment-7b8c6f4498-8dlhb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8dlhb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-8dlhb,UID:4e3c1710-7ca9-48dc-b6a0-2cceaa3a7a61,ResourceVersion:6963755,Generation:0,CreationTimestamp:2021-02-18 01:06:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc000229f37 0xc000229f38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000229fb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000229fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.32,StartTime:2021-02-18 01:06:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-02-18 01:06:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://16ca5b6aecb654c119dd520acb36f92ab058bcc844b07fa4bb92d2fd4fe161bf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.991: INFO: Pod "nginx-deployment-7b8c6f4498-8lwx4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8lwx4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-8lwx4,UID:708e4de0-b70a-4ec8-9b1a-7690a63b83d5,ResourceVersion:6963956,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc0030620a7 0xc0030620a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003062120} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003062140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.991: INFO: Pod "nginx-deployment-7b8c6f4498-g654w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g654w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-g654w,UID:258b8532-29aa-4342-a4db-2fa756456a40,ResourceVersion:6963958,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc003062207 0xc003062208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003062280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030622a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.991: INFO: Pod "nginx-deployment-7b8c6f4498-gbwqt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gbwqt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-gbwqt,UID:2d103eb1-1254-4b95-b6f9-32bedcba4a57,ResourceVersion:6963992,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc003062367 0xc003062368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030623e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003062400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.991: INFO: Pod "nginx-deployment-7b8c6f4498-lxlhr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lxlhr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-lxlhr,UID:dd4e88fb-deb8-42da-a4e6-de9c7dc87edb,ResourceVersion:6963936,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc0030624d7 0xc0030624d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003062550} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003062570}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.991: INFO: Pod "nginx-deployment-7b8c6f4498-nsczr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nsczr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-nsczr,UID:53be6722-fbec-4bf3-8054-69f2639605ec,ResourceVersion:6963968,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc003062637 0xc003062638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030626b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030626d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.991: INFO: Pod "nginx-deployment-7b8c6f4498-nsdr5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nsdr5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-nsdr5,UID:0cd74352-50fd-47c5-aaf1-790e5e3236e7,ResourceVersion:6963948,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc003062797 0xc003062798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003062810} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003062830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.991: INFO: Pod "nginx-deployment-7b8c6f4498-p6k6s" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-p6k6s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-p6k6s,UID:4686bbda-5c66-4bb4-86bd-4511afd0cd3f,ResourceVersion:6963758,Generation:0,CreationTimestamp:2021-02-18 01:06:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc0030628f7 0xc0030628f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003062990} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030629b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.31,StartTime:2021-02-18 01:06:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-02-18 01:06:25 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e8d84e98d017dc83cfd0b34cde6454257c46ea0294302a69cef24af4005ca551}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.992: INFO: Pod "nginx-deployment-7b8c6f4498-pm2xv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pm2xv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-pm2xv,UID:ee96298c-2a21-4ddb-9316-c929ef2b974c,ResourceVersion:6963805,Generation:0,CreationTimestamp:2021-02-18 01:06:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc003062a97 0xc003062a98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003062b10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003062b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.3,PodIP:10.244.1.34,StartTime:2021-02-18 01:06:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-02-18 01:06:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d97961b4e7f5b54a3c39ad90c13552ee3874a3e5488c5e968500e586d998cad2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.992: INFO: Pod "nginx-deployment-7b8c6f4498-prgbr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-prgbr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-prgbr,UID:23231bfe-7d59-44eb-91dd-dc6d9bbab4b8,ResourceVersion:6963796,Generation:0,CreationTimestamp:2021-02-18 01:06:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc003062c07 0xc003062c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003062c80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003062ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.168,StartTime:2021-02-18 01:06:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-02-18 01:06:31 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2a9d9701fea9a6714f8c464132b33563c221663ea4acf17fb27674048398b28f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.992: INFO: Pod "nginx-deployment-7b8c6f4498-r796j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r796j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-r796j,UID:16273034-87f4-4280-b556-2a6eb3ccd1de,ResourceVersion:6963925,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc003062d77 0xc003062d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003062df0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003062e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.992: INFO: Pod "nginx-deployment-7b8c6f4498-rj66f" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rj66f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-rj66f,UID:3b70b134-2eec-401f-acb0-5b94a0ef5f10,ResourceVersion:6963777,Generation:0,CreationTimestamp:2021-02-18 01:06:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc003062ed7 0xc003062ed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003062f50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003062f70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:20 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.165,StartTime:2021-02-18 01:06:20 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-02-18 01:06:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://dc7a7177fabcfc69c7c21f1660364e384124f84d5316bd9bbd394147ca540569}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.992: INFO: Pod "nginx-deployment-7b8c6f4498-rsqtl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rsqtl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-rsqtl,UID:b648ef17-c67d-489e-9d7a-08c76f142ae7,ResourceVersion:6963772,Generation:0,CreationTimestamp:2021-02-18 01:06:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc003063047 0xc003063048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030630c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030630e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:29 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:29 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.166,StartTime:2021-02-18 01:06:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-02-18 01:06:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0c7d64363716aa9924db5b6e0d7d6eeeba6b51b19733241dbd21cc982edaf8c7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.992: INFO: Pod "nginx-deployment-7b8c6f4498-sqgmq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sqgmq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-sqgmq,UID:08423c70-3198-48df-af14-809ccb44d37e,ResourceVersion:6963979,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc0030631b7 0xc0030631b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003063230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003063250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.992: INFO: Pod "nginx-deployment-7b8c6f4498-vvcnc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vvcnc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-vvcnc,UID:57f55eb0-97de-49d4-850a-dce5f1443b7a,ResourceVersion:6963993,Generation:0,CreationTimestamp:2021-02-18 01:06:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc003063317 0xc003063318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003063390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030633b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:35 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2021-02-18 01:06:35 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb 18 01:06:38.993: INFO: Pod "nginx-deployment-7b8c6f4498-w5h4q" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w5h4q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-3138,SelfLink:/api/v1/namespaces/deployment-3138/pods/nginx-deployment-7b8c6f4498-w5h4q,UID:0317719b-c91f-493f-bead-66f8a6c004dd,ResourceVersion:6963803,Generation:0,CreationTimestamp:2021-02-18 01:06:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8b47ae5f-04b1-486f-a2b9-31eb688a36f7 0xc003063477 0xc003063478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-lpx8c {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-lpx8c,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-lpx8c true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003063500} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003063520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:21 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-02-18 01:06:20 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.167,StartTime:2021-02-18 01:06:21 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2021-02-18 01:06:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7ee66936886b358b4c12573482677aa065820048fabd815ebddd4d6638ea439d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
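
The framework lines above tag each dumped pod as "available" or "not available". A pod generally counts as available once it is Running and its Ready condition has held for at least the deployment's minReadySeconds. A standalone sketch of that rule applied to two of the dumps above; podState is a hypothetical condensed type for illustration, not the e2e framework's own helper.

package main

import (
	"fmt"
	"time"
)

// podState is a hypothetical condensed view of the pod dumps above;
// it is not a real Kubernetes API type.
type podState struct {
	name       string
	phase      string
	ready      bool
	readySince time.Time // LastTransitionTime of the Ready condition
}

// isAvailable applies the usual availability rule (assumption: Running,
// Ready, and Ready for at least minReady), matching the labels in the log.
func isAvailable(p podState, minReady time.Duration, now time.Time) bool {
	if p.phase != "Running" || !p.ready {
		return false
	}
	return now.Sub(p.readySince) >= minReady
}

func main() {
	now := time.Date(2021, 2, 18, 1, 6, 38, 0, time.UTC)
	pods := []podState{
		// Ready since 01:06:27 -> available.
		{"nginx-deployment-7b8c6f4498-p6k6s", "Running", true, now.Add(-11 * time.Second)},
		// Still ContainerCreating -> not available.
		{"nginx-deployment-7b8c6f4498-8lwx4", "Pending", false, time.Time{}},
	}
	for _, p := range pods {
		fmt.Printf("Pod %q available=%v\n", p.name, isAvailable(p, 0, now))
	}
}
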
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:06:38.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3138" for this suite.
Feb 18 01:07:06.961: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:07:07.046: INFO: namespace deployment-3138 deletion completed in 27.760128618s

• [SLOW TEST:46.308 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
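
The proportional-scaling behaviour this test exercises spreads a change in .spec.replicas across the deployment's ReplicaSets in proportion to their current sizes, so a scale-up in the middle of a rollout grows both the old and new ReplicaSets. A minimal sketch of that arithmetic, with simplified round-robin leftover handling rather than the deployment controller's exact largest-remainder code:

package main

import "fmt"

// scaleProportionally spreads newTotal replicas across ReplicaSets in
// proportion to their current sizes. Leftover handling is a simplification
// (round-robin), not the controller's exact code.
func scaleProportionally(current []int32, newTotal int32) []int32 {
	out := make([]int32, len(current))
	if len(current) == 0 {
		return out
	}
	var oldTotal int32
	for _, c := range current {
		oldTotal += c
	}
	if oldTotal == 0 {
		out[0] = newTotal
		return out
	}
	var assigned int32
	for i, c := range current {
		out[i] = c * newTotal / oldTotal // floor of the proportional share
		assigned += out[i]
	}
	for i := 0; assigned < newTotal; i = (i + 1) % len(out) {
		out[i]++ // distribute the rounding leftovers
		assigned++
	}
	return out
}

func main() {
	// A mid-rollout deployment with ReplicaSets at 5 and 8 replicas,
	// scaled from 13 to 30 total: both grow proportionally.
	fmt.Println(scaleProportionally([]int32{5, 8}, 30)) // prints [12 18]
}
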
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:07:07.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-bbg8
STEP: Creating a pod to test atomic-volume-subpath
Feb 18 01:07:07.516: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-bbg8" in namespace "subpath-4650" to be "success or failure"
Feb 18 01:07:07.520: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.741328ms
Feb 18 01:07:09.524: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007450026s
Feb 18 01:07:11.532: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015938039s
Feb 18 01:07:13.537: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Running", Reason="", readiness=true. Elapsed: 6.020458847s
Feb 18 01:07:15.540: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Running", Reason="", readiness=true. Elapsed: 8.024174803s
Feb 18 01:07:17.544: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Running", Reason="", readiness=true. Elapsed: 10.028200549s
Feb 18 01:07:19.549: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Running", Reason="", readiness=true. Elapsed: 12.032638181s
Feb 18 01:07:21.553: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Running", Reason="", readiness=true. Elapsed: 14.036726274s
Feb 18 01:07:23.557: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Running", Reason="", readiness=true. Elapsed: 16.041122112s
Feb 18 01:07:25.561: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Running", Reason="", readiness=true. Elapsed: 18.045022883s
Feb 18 01:07:27.565: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Running", Reason="", readiness=true. Elapsed: 20.048983322s
Feb 18 01:07:29.569: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Running", Reason="", readiness=true. Elapsed: 22.052891862s
Feb 18 01:07:31.573: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Running", Reason="", readiness=true. Elapsed: 24.056843781s
Feb 18 01:07:33.577: INFO: Pod "pod-subpath-test-downwardapi-bbg8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.060566939s
STEP: Saw pod success
Feb 18 01:07:33.577: INFO: Pod "pod-subpath-test-downwardapi-bbg8" satisfied condition "success or failure"
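
The Elapsed lines above are the framework polling the pod's phase every few seconds until it reaches Succeeded or Failed. A minimal client-go sketch of the same wait, assuming current client-go signatures (context-taking Get, which postdates the v1.15 client in this log) and reusing the kubeconfig path and pod name from the log; waitForPodCompletion is our name, not the framework's.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodCompletion mirrors the "success or failure" wait logged above:
// poll the pod until it is Succeeded (nil) or Failed (error).
func waitForPodCompletion(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			switch pod.Status.Phase {
			case corev1.PodSucceeded:
				return true, nil
			case corev1.PodFailed:
				return false, fmt.Errorf("pod %s/%s failed", ns, name)
			default:
				return false, nil // Pending or Running: keep polling
			}
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForPodCompletion(context.Background(), cs, "subpath-4650", "pod-subpath-test-downwardapi-bbg8"); err != nil {
		log.Fatal(err)
	}
}
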
Feb 18 01:07:33.579: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-bbg8 container test-container-subpath-downwardapi-bbg8: 
STEP: delete the pod
Feb 18 01:07:33.645: INFO: Waiting for pod pod-subpath-test-downwardapi-bbg8 to disappear
Feb 18 01:07:33.655: INFO: Pod pod-subpath-test-downwardapi-bbg8 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-bbg8
Feb 18 01:07:33.655: INFO: Deleting pod "pod-subpath-test-downwardapi-bbg8" in namespace "subpath-4650"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:07:33.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4650" for this suite.
Feb 18 01:07:39.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:07:39.775: INFO: namespace subpath-4650 deletion completed in 6.114767241s

• [SLOW TEST:32.728 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
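
For orientation, the fixture behind this test family is a pod that mounts a single file out of a downward API volume via subPath; downward API volumes are "atomic writer" volumes because the kubelet updates their contents with an atomic symlink swap. A hedged sketch of such a pod spec, with illustrative names, image, and paths rather than the test's exact manifest:

package main

import (
	"encoding/json"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-downwardapi-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "downward",
				VolumeSource: corev1.VolumeSource{
					// An atomic-writer volume: the kubelet projects pod
					// metadata into files via an atomic symlink swap.
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"cat", "/probe/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "downward",
					MountPath: "/probe/podname",
					SubPath:   "podname", // mount a single file out of the volume
				}},
			}},
		},
	}
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
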
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:07:39.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 01:07:39.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4319'
Feb 18 01:07:42.740: INFO: stderr: ""
Feb 18 01:07:42.740: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb 18 01:07:42.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4319'
Feb 18 01:07:43.097: INFO: stderr: ""
Feb 18 01:07:43.097: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 18 01:07:44.101: INFO: Selector matched 1 pods for map[app:redis]
Feb 18 01:07:44.101: INFO: Found 0 / 1
Feb 18 01:07:45.102: INFO: Selector matched 1 pods for map[app:redis]
Feb 18 01:07:45.102: INFO: Found 0 / 1
Feb 18 01:07:46.102: INFO: Selector matched 1 pods for map[app:redis]
Feb 18 01:07:46.102: INFO: Found 0 / 1
Feb 18 01:07:47.102: INFO: Selector matched 1 pods for map[app:redis]
Feb 18 01:07:47.102: INFO: Found 1 / 1
Feb 18 01:07:47.102: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 18 01:07:47.106: INFO: Selector matched 1 pods for map[app:redis]
Feb 18 01:07:47.106: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
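
The "Selector matched ... Found n / 1" lines above are a label-selector poll: list the pods carrying app=redis and count how many are up. One iteration of that loop as a compilable client-go sketch; clientset wiring is as in the earlier sketch, and the function name is illustrative.

package kubectlexamples

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// onePollIteration reproduces one "Selector matched ... Found n / 1" line:
// list pods with the app=redis label and count those already Running.
func onePollIteration(ctx context.Context, cs kubernetes.Interface, ns string) (matched, running int, err error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: "app=redis"})
	if err != nil {
		return 0, 0, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
	}
	return len(pods.Items), running, nil
}
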
Feb 18 01:07:47.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-6r47z --namespace=kubectl-4319'
Feb 18 01:07:47.217: INFO: stderr: ""
Feb 18 01:07:47.217: INFO: stdout: "Name:           redis-master-6r47z\nNamespace:      kubectl-4319\nPriority:       0\nNode:           iruya-worker/172.18.0.3\nStart Time:     Thu, 18 Feb 2021 01:07:42 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.244.1.48\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://ecbe9b8f2491ab18155447607c93d9ea25c696c3be0b8ef3f138a8f79f90a67c\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 18 Feb 2021 01:07:45 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-5v9wx (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-5v9wx:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-5v9wx\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  5s    default-scheduler      Successfully assigned kubectl-4319/redis-master-6r47z to iruya-worker\n  Normal  Pulled     3s    kubelet, iruya-worker  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-worker  Created container redis-master\n  Normal  Started    2s    kubelet, iruya-worker  Started container redis-master\n"
Feb 18 01:07:47.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4319'
Feb 18 01:07:47.342: INFO: stderr: ""
Feb 18 01:07:47.342: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-4319\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: redis-master-6r47z\n"
Feb 18 01:07:47.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4319'
Feb 18 01:07:47.441: INFO: stderr: ""
Feb 18 01:07:47.441: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-4319\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.96.31.83\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.48:6379\nSession Affinity:  None\nEvents:            <none>\n"
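
The three describe calls above shell out to kubectl, but the same fields (service type, cluster IP, endpoints) can be read directly from the API. A hedged sketch, reusing the namespace and service name from the log and the clientset wiring shown earlier:

package kubectlexamples

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printServiceSummary reads the fields `kubectl describe service` prints
// (type, cluster IP, endpoints) directly from the API instead of shelling out.
// Example call: printServiceSummary(ctx, cs, "kubectl-4319", "redis-master").
func printServiceSummary(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("Name: %s\nType: %s\nIP:   %s\n", svc.Name, svc.Spec.Type, svc.Spec.ClusterIP)

	ep, err := cs.CoreV1().Endpoints(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, ss := range ep.Subsets {
		for _, addr := range ss.Addresses {
			for _, port := range ss.Ports {
				fmt.Printf("Endpoints: %s:%d\n", addr.IP, port.Port)
			}
		}
	}
	return nil
}
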
Feb 18 01:07:47.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Feb 18 01:07:47.570: INFO: stderr: ""
Feb 18 01:07:47.570: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 10 Jan 2021 17:23:46 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Thu, 18 Feb 2021 01:07:11 +0000   Sun, 10 Jan 2021 17:23:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Thu, 18 Feb 2021 01:07:11 +0000   Sun, 10 Jan 2021 17:23:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Thu, 18 Feb 2021 01:07:11 +0000   Sun, 10 Jan 2021 17:23:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Thu, 18 Feb 2021 01:07:11 +0000   Sun, 10 Jan 2021 17:26:08 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.15\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759868Ki\n pods:               110\nSystem Info:\n Machine ID:                 b6758ea9ce704b29ad724f46768efef2\n System UUID:                a94baec1-fdf5-43c1-b226-5c638d474e06\n Boot ID:                    b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version:             4.15.0-118-generic\n OS Image:                   Ubuntu Groovy Gorilla (development branch)\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.4.0\n Kubelet Version:            v1.15.12\n Kube-Proxy Version:         v1.15.12\nPodCIDR:                     10.244.0.0/24\nProviderID:                  kind://docker/iruya/iruya-control-plane\nNon-terminated Pods:         (6 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         6d11h\n  kube-system                kindnet-tnff7                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      38d\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         6d11h\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         38d\n  kube-system                kube-proxy-8jndp                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         38d\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         38d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests   Limits\n  --------           --------   ------\n  cpu                650m (4%)  100m (0%)\n  memory             50Mi (0%)  50Mi (0%)\n  ephemeral-storage  0 (0%)     0 (0%)\nEvents:              <none>\n"
Feb 18 01:07:47.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4319'
Feb 18 01:07:47.685: INFO: stderr: ""
Feb 18 01:07:47.685: INFO: stdout: "Name:         kubectl-4319\nLabels:       e2e-framework=kubectl\n              e2e-run=5d432f22-6cfa-4901-8986-afee7c80f2e1\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:07:47.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4319" for this suite.
Feb 18 01:08:13.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:08:13.801: INFO: namespace kubectl-4319 deletion completed in 26.112028872s

• [SLOW TEST:34.026 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
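Note: the describe checks above run kubectl describe against a pod, a replication controller, a service, a node, and a namespace, asserting that key fields (labels, selector, image, ports, events) appear in the output. The ReplicationController the test drives can be reconstructed from the describe output; a close approximation, not the suite's exact fixture:

apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379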
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:08:13.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb 18 01:08:13.902: INFO: Waiting up to 5m0s for pod "var-expansion-5d90a8bc-1975-4c30-828d-7f435b3633a6" in namespace "var-expansion-9364" to be "success or failure"
Feb 18 01:08:13.911: INFO: Pod "var-expansion-5d90a8bc-1975-4c30-828d-7f435b3633a6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.978001ms
Feb 18 01:08:15.915: INFO: Pod "var-expansion-5d90a8bc-1975-4c30-828d-7f435b3633a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01298514s
Feb 18 01:08:17.919: INFO: Pod "var-expansion-5d90a8bc-1975-4c30-828d-7f435b3633a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016896682s
STEP: Saw pod success
Feb 18 01:08:17.919: INFO: Pod "var-expansion-5d90a8bc-1975-4c30-828d-7f435b3633a6" satisfied condition "success or failure"
Feb 18 01:08:17.922: INFO: Trying to get logs from node iruya-worker pod var-expansion-5d90a8bc-1975-4c30-828d-7f435b3633a6 container dapi-container: <nil>
STEP: delete the pod
Feb 18 01:08:17.954: INFO: Waiting for pod var-expansion-5d90a8bc-1975-4c30-828d-7f435b3633a6 to disappear
Feb 18 01:08:17.959: INFO: Pod var-expansion-5d90a8bc-1975-4c30-828d-7f435b3633a6 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:08:17.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9364" for this suite.
Feb 18 01:08:24.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:08:24.103: INFO: namespace var-expansion-9364 deletion completed in 6.140415214s

• [SLOW TEST:10.301 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
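Note: the variable-expansion test above creates a pod whose container command references an environment variable with the $(VAR) syntax, which the kubelet expands before starting the container, then checks the expanded value in the pod log. A minimal sketch; only the container name dapi-container comes from the log, while the image, variable name, and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox               # illustrative image
    # $(MESSAGE) is expanded by the kubelet, not by the shell
    command: ["sh", "-c", "echo $(MESSAGE)"]
    env:
    - name: MESSAGE
      value: "test-value"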
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:08:24.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 01:08:24.206: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:08:25.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8131" for this suite.
Feb 18 01:08:31.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:08:31.424: INFO: namespace custom-resource-definition-8131 deletion completed in 6.126660949s

• [SLOW TEST:7.321 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
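Note: on a v1.15 cluster, CustomResourceDefinitions are served from apiextensions.k8s.io/v1beta1, and the test above does nothing more than create a definition through that API and delete it again. A minimal sketch of such a CRD; the group and names are illustrative:

apiVersion: apiextensions.k8s.io/v1beta1   # the CRD API served by v1.15
kind: CustomResourceDefinition
metadata:
  name: foos.example.com                   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo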
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:08:31.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-6426
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6426 to expose endpoints map[]
Feb 18 01:08:31.604: INFO: Get endpoints failed (40.027705ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 18 01:08:32.608: INFO: successfully validated that service multi-endpoint-test in namespace services-6426 exposes endpoints map[] (1.044125126s elapsed)
STEP: Creating pod pod1 in namespace services-6426
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6426 to expose endpoints map[pod1:[100]]
Feb 18 01:08:35.726: INFO: successfully validated that service multi-endpoint-test in namespace services-6426 exposes endpoints map[pod1:[100]] (3.110666348s elapsed)
STEP: Creating pod pod2 in namespace services-6426
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6426 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 18 01:08:39.820: INFO: successfully validated that service multi-endpoint-test in namespace services-6426 exposes endpoints map[pod1:[100] pod2:[101]] (4.08880393s elapsed)
STEP: Deleting pod pod1 in namespace services-6426
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6426 to expose endpoints map[pod2:[101]]
Feb 18 01:08:40.883: INFO: successfully validated that service multi-endpoint-test in namespace services-6426 exposes endpoints map[pod2:[101]] (1.034331271s elapsed)
STEP: Deleting pod pod2 in namespace services-6426
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6426 to expose endpoints map[]
Feb 18 01:08:41.010: INFO: successfully validated that service multi-endpoint-test in namespace services-6426 exposes endpoints map[] (122.068675ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:08:41.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6426" for this suite.
Feb 18 01:09:03.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:09:03.377: INFO: namespace services-6426 deletion completed in 22.128939185s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:31.952 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
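Note: the multiport test above creates a single service with two ports, then adds and deletes backing pods while waiting for the endpoints object to converge on the expected map (pod1 serving container port 100, pod2 serving 101, per the log). A sketch of the shape of such a service; the selector and port names are illustrative, the container ports come from the endpoint maps above:

apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multiport-pods      # illustrative; must match the test pods' labels
  ports:
  - name: portname1
    port: 80
    targetPort: 100          # resolved by pod1
  - name: portname2
    port: 81
    targetPort: 101          # resolved by pod2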
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:09:03.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 18 01:09:15.538: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:15.551: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:17.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:17.555: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:19.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:19.555: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:21.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:21.555: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:23.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:23.556: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:25.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:25.555: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:27.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:27.556: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:29.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:29.555: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:31.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:31.555: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:33.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:33.874: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:35.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:35.555: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:37.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:37.556: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:39.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:39.556: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:41.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:41.556: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 18 01:09:43.551: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 18 01:09:43.556: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:09:43.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9683" for this suite.
Feb 18 01:10:05.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:10:05.674: INFO: namespace container-lifecycle-hook-9683 deletion completed in 22.106633488s

• [SLOW TEST:62.296 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
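Note: the lifecycle test above first starts a handler pod (the "container to handle the HTTPGet hook request"), then a pod with a preStop exec hook, deletes the hooked pod, and polls until it disappears before verifying the handler saw the hook fire. A minimal sketch of a pod with a preStop exec hook; the pod name comes from the log, while the image, command, and handler endpoint are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: pod-with-prestop-exec-hook
    image: busybox                      # illustrative image
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container before SIGTERM is sent;
          # here it pings an (illustrative) handler endpoint
          command: ["sh", "-c", "wget -qO- http://handler-pod-ip:8080/echo?msg=prestop"]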
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:10:05.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 01:10:05.773: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb 18 01:10:05.779: INFO: Number of nodes with available pods: 0
Feb 18 01:10:05.779: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb 18 01:10:05.855: INFO: Number of nodes with available pods: 0
Feb 18 01:10:05.855: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:06.859: INFO: Number of nodes with available pods: 0
Feb 18 01:10:06.859: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:07.915: INFO: Number of nodes with available pods: 0
Feb 18 01:10:07.915: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:08.863: INFO: Number of nodes with available pods: 0
Feb 18 01:10:08.864: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:09.859: INFO: Number of nodes with available pods: 1
Feb 18 01:10:09.859: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb 18 01:10:09.962: INFO: Number of nodes with available pods: 1
Feb 18 01:10:09.962: INFO: Number of running nodes: 0, number of available pods: 1
Feb 18 01:10:10.966: INFO: Number of nodes with available pods: 0
Feb 18 01:10:10.966: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb 18 01:10:10.990: INFO: Number of nodes with available pods: 0
Feb 18 01:10:10.990: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:11.995: INFO: Number of nodes with available pods: 0
Feb 18 01:10:11.995: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:12.995: INFO: Number of nodes with available pods: 0
Feb 18 01:10:12.995: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:13.995: INFO: Number of nodes with available pods: 0
Feb 18 01:10:13.995: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:14.994: INFO: Number of nodes with available pods: 0
Feb 18 01:10:14.994: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:15.994: INFO: Number of nodes with available pods: 0
Feb 18 01:10:15.994: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:16.994: INFO: Number of nodes with available pods: 0
Feb 18 01:10:16.994: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:18.005: INFO: Number of nodes with available pods: 0
Feb 18 01:10:18.005: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:18.994: INFO: Number of nodes with available pods: 0
Feb 18 01:10:18.995: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:19.994: INFO: Number of nodes with available pods: 0
Feb 18 01:10:19.994: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:21.068: INFO: Number of nodes with available pods: 0
Feb 18 01:10:21.069: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:21.994: INFO: Number of nodes with available pods: 0
Feb 18 01:10:21.995: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:22.994: INFO: Number of nodes with available pods: 0
Feb 18 01:10:22.994: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:23.994: INFO: Number of nodes with available pods: 0
Feb 18 01:10:23.994: INFO: Node iruya-worker is running more than one daemon pod
Feb 18 01:10:24.997: INFO: Number of nodes with available pods: 1
Feb 18 01:10:24.997: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4125, will wait for the garbage collector to delete the pods
Feb 18 01:10:25.060: INFO: Deleting DaemonSet.extensions daemon-set took: 5.93314ms
Feb 18 01:10:25.360: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.256515ms
Feb 18 01:10:41.181: INFO: Number of nodes with available pods: 0
Feb 18 01:10:41.181: INFO: Number of running nodes: 0, number of available pods: 0
Feb 18 01:10:41.184: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4125/daemonsets","resourceVersion":"6965050"},"items":null}

Feb 18 01:10:41.186: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4125/pods","resourceVersion":"6965050"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:10:41.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4125" for this suite.
Feb 18 01:10:47.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:10:47.374: INFO: namespace daemonsets-4125 deletion completed in 6.105931507s

• [SLOW TEST:41.701 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
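Note: the "complex daemon" test above relies on a DaemonSet whose pod template carries a nodeSelector, so daemon pods only run once a node is labeled to match; mid-test the node label and selector are changed (blue, then green) and the update strategy is switched to RollingUpdate. A sketch of such a DaemonSet; the label keys, values, and image are illustrative:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set
  updateStrategy:
    type: RollingUpdate          # the strategy the test switches to
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      nodeSelector:
        color: green             # pods schedule only onto nodes labeled color=green
      containers:
      - name: app
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # illustrative image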
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:10:47.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 18 01:10:47.435: INFO: Waiting up to 5m0s for pod "pod-ac7be7a0-4a55-4a19-9ebc-02f0b03aebd6" in namespace "emptydir-8241" to be "success or failure"
Feb 18 01:10:47.485: INFO: Pod "pod-ac7be7a0-4a55-4a19-9ebc-02f0b03aebd6": Phase="Pending", Reason="", readiness=false. Elapsed: 49.767113ms
Feb 18 01:10:49.586: INFO: Pod "pod-ac7be7a0-4a55-4a19-9ebc-02f0b03aebd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151527139s
Feb 18 01:10:51.591: INFO: Pod "pod-ac7be7a0-4a55-4a19-9ebc-02f0b03aebd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.155779156s
STEP: Saw pod success
Feb 18 01:10:51.591: INFO: Pod "pod-ac7be7a0-4a55-4a19-9ebc-02f0b03aebd6" satisfied condition "success or failure"
Feb 18 01:10:51.594: INFO: Trying to get logs from node iruya-worker pod pod-ac7be7a0-4a55-4a19-9ebc-02f0b03aebd6 container test-container: <nil>
STEP: delete the pod
Feb 18 01:10:51.636: INFO: Waiting for pod pod-ac7be7a0-4a55-4a19-9ebc-02f0b03aebd6 to disappear
Feb 18 01:10:51.643: INFO: Pod pod-ac7be7a0-4a55-4a19-9ebc-02f0b03aebd6 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:10:51.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8241" for this suite.
Feb 18 01:10:57.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:10:57.803: INFO: namespace emptydir-8241 deletion completed in 6.156681232s

• [SLOW TEST:10.428 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
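Note: in the (non-root,0666,tmpfs) variant above, the pod runs as a non-root UID and verifies the ownership and 0666 mode of a file created inside a memory-backed emptyDir. A rough equivalent using a stock busybox image in place of the suite's mounttest image; the UID, path, and command are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example   # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # non-root UID, illustrative
  containers:
  - name: test-container
    image: busybox               # illustrative image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir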
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:10:57.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 18 01:10:57.933: INFO: Waiting up to 5m0s for pod "pod-49f7591f-a64f-42a1-9216-e3a66b503ef9" in namespace "emptydir-1557" to be "success or failure"
Feb 18 01:10:57.937: INFO: Pod "pod-49f7591f-a64f-42a1-9216-e3a66b503ef9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.311262ms
Feb 18 01:10:59.969: INFO: Pod "pod-49f7591f-a64f-42a1-9216-e3a66b503ef9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03563364s
Feb 18 01:11:01.995: INFO: Pod "pod-49f7591f-a64f-42a1-9216-e3a66b503ef9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061094969s
Feb 18 01:11:03.998: INFO: Pod "pod-49f7591f-a64f-42a1-9216-e3a66b503ef9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064841029s
STEP: Saw pod success
Feb 18 01:11:03.998: INFO: Pod "pod-49f7591f-a64f-42a1-9216-e3a66b503ef9" satisfied condition "success or failure"
Feb 18 01:11:04.002: INFO: Trying to get logs from node iruya-worker2 pod pod-49f7591f-a64f-42a1-9216-e3a66b503ef9 container test-container: <nil>
STEP: delete the pod
Feb 18 01:11:04.030: INFO: Waiting for pod pod-49f7591f-a64f-42a1-9216-e3a66b503ef9 to disappear
Feb 18 01:11:04.077: INFO: Pod pod-49f7591f-a64f-42a1-9216-e3a66b503ef9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:11:04.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1557" for this suite.
Feb 18 01:11:10.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:11:10.180: INFO: namespace emptydir-1557 deletion completed in 6.09969652s

• [SLOW TEST:12.377 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
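Note: the (root,0644,default) variant differs from the previous sketch only in running as root, creating the file with mode 0644, and leaving the emptyDir medium unset so the volume is backed by node-local disk instead of tmpfs:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-default-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # illustrative image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -ln /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # no medium set: node-local disk, not tmpfs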
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:11:10.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:11:14.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2394" for this suite.
Feb 18 01:12:06.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:12:06.405: INFO: namespace kubelet-test-2394 deletion completed in 52.108023681s

• [SLOW TEST:56.225 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
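Note: the hostAliases test above checks that entries declared in pod.spec.hostAliases are written into the kubelet-managed /etc/hosts of the container. A minimal sketch; the IP and hostnames are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-example   # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    # the aliases above appear as lines in the managed hosts file
    command: ["cat", "/etc/hosts"]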
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:12:06.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 18 01:12:06.457: INFO: namespace kubectl-1279
Feb 18 01:12:06.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1279'
Feb 18 01:12:06.836: INFO: stderr: ""
Feb 18 01:12:06.836: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 18 01:12:07.840: INFO: Selector matched 1 pods for map[app:redis]
Feb 18 01:12:07.840: INFO: Found 0 / 1
Feb 18 01:12:08.881: INFO: Selector matched 1 pods for map[app:redis]
Feb 18 01:12:08.881: INFO: Found 0 / 1
Feb 18 01:12:09.841: INFO: Selector matched 1 pods for map[app:redis]
Feb 18 01:12:09.841: INFO: Found 0 / 1
Feb 18 01:12:10.840: INFO: Selector matched 1 pods for map[app:redis]
Feb 18 01:12:10.840: INFO: Found 1 / 1
Feb 18 01:12:10.840: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 18 01:12:10.843: INFO: Selector matched 1 pods for map[app:redis]
Feb 18 01:12:10.843: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 18 01:12:10.843: INFO: wait on redis-master startup in kubectl-1279 
Feb 18 01:12:10.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-2wqtj redis-master --namespace=kubectl-1279'
Feb 18 01:12:10.954: INFO: stderr: ""
Feb 18 01:12:10.954: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 18 Feb 01:12:09.945 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 18 Feb 01:12:09.945 # Server started, Redis version 3.2.12\n1:M 18 Feb 01:12:09.945 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 18 Feb 01:12:09.945 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb 18 01:12:10.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-1279'
Feb 18 01:12:11.091: INFO: stderr: ""
Feb 18 01:12:11.091: INFO: stdout: "service/rm2 exposed\n"
Feb 18 01:12:11.101: INFO: Service rm2 in namespace kubectl-1279 found.
STEP: exposing service
Feb 18 01:12:13.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-1279'
Feb 18 01:12:13.247: INFO: stderr: ""
Feb 18 01:12:13.247: INFO: stdout: "service/rm3 exposed\n"
Feb 18 01:12:13.257: INFO: Service rm3 in namespace kubectl-1279 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:12:15.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1279" for this suite.
Feb 18 01:12:39.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:12:39.451: INFO: namespace kubectl-1279 deletion completed in 24.18250509s

• [SLOW TEST:33.045 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
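Note: kubectl expose, as run twice above, synthesizes a new Service from an existing resource's selector and the given port flags. An approximation of the Service the first expose command produces; the selector is copied from the redis-master RC described earlier:

# kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
apiVersion: v1
kind: Service
metadata:
  name: rm2
spec:
  selector:          # inherited from the RC's selector
    app: redis
    role: master
  ports:
  - port: 1234
    targetPort: 6379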
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 18 01:12:39.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 18 01:12:39.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 18 01:12:43.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7427" for this suite.
Feb 18 01:13:33.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 18 01:13:33.768: INFO: namespace pods-7427 deletion completed in 50.13116193s

• [SLOW TEST:54.317 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
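Note: the websocket test above drives the pod exec subresource over a websocket upgrade instead of SPDY. A sketch of the request shape, with the namespace taken from the log and the command and query values illustrative:

# GET /api/v1/namespaces/pods-7427/pods/<pod-name>/exec?command=echo&command=remote+exec&stdout=true&stderr=true
# Upgrade: websocket
# Sec-WebSocket-Protocol: channel.k8s.io
# With the channel.k8s.io subprotocol, every binary frame is prefixed with a
# single channel byte: 0 = stdin, 1 = stdout, 2 = stderr.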
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Feb 18 01:13:33.769: INFO: Running AfterSuite actions on all nodes
Feb 18 01:13:33.769: INFO: Running AfterSuite actions on node 1
Feb 18 01:13:33.769: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 9707.713 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS