I0805 12:55:51.132655 6 e2e.go:243] Starting e2e run "c242b5bc-99b4-4980-bd2d-5cd4ac7b2498" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1596632150 - Will randomize all specs
Will run 215 of 4413 specs

Aug 5 12:55:51.322: INFO: >>> kubeConfig: /root/.kube/config
Aug 5 12:55:51.327: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 5 12:55:51.349: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 5 12:55:51.378: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 5 12:55:51.378: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 5 12:55:51.378: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 5 12:55:51.388: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 5 12:55:51.388: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 5 12:55:51.388: INFO: e2e test version: v1.15.12
Aug 5 12:55:51.389: INFO: kube-apiserver version: v1.15.12
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:55:51.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
Aug 5 12:55:51.544: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 5 12:55:51.579: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:55:51.660: INFO: Number of nodes with available pods: 0
Aug 5 12:55:51.660: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:55:52.775: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:55:52.779: INFO: Number of nodes with available pods: 0
Aug 5 12:55:52.779: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:55:53.665: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:55:53.669: INFO: Number of nodes with available pods: 0
Aug 5 12:55:53.669: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:55:54.694: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:55:54.697: INFO: Number of nodes with available pods: 0
Aug 5 12:55:54.697: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:55:55.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:55:55.809: INFO: Number of nodes with available pods: 0
Aug 5 12:55:55.809: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:55:56.703: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:55:56.707: INFO: Number of nodes with available pods: 1
Aug 5 12:55:56.707: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 5 12:55:57.679: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:55:57.682: INFO: Number of nodes with available pods: 2
Aug 5 12:55:57.682: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 5 12:55:57.724: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:55:57.728: INFO: Number of nodes with available pods: 1
Aug 5 12:55:57.728: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:55:58.733: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:55:58.996: INFO: Number of nodes with available pods: 1
Aug 5 12:55:58.996: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:55:59.734: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:55:59.737: INFO: Number of nodes with available pods: 1
Aug 5 12:55:59.737: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:56:00.732: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:56:00.735: INFO: Number of nodes with available pods: 1
Aug 5 12:56:00.735: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:56:01.734: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:56:01.738: INFO: Number of nodes with available pods: 1
Aug 5 12:56:01.738: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:56:02.745: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:56:02.748: INFO: Number of nodes with available pods: 1
Aug 5 12:56:02.748: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:56:03.734: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:56:03.738: INFO: Number of nodes with available pods: 1
Aug 5 12:56:03.738: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:56:04.732: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:56:04.736: INFO: Number of nodes with available pods: 1
Aug 5 12:56:04.736: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:56:05.735: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:56:05.738: INFO: Number of nodes with available pods: 1
Aug 5 12:56:05.738: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:56:07.237: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:56:07.249: INFO: Number of nodes with available pods: 1
Aug 5 12:56:07.249: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:56:07.732: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:56:07.735: INFO: Number of nodes with available pods: 1
Aug 5 12:56:07.735: INFO: Node iruya-worker is running more than one daemon pod
Aug 5 12:56:08.732: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Aug 5 12:56:08.735: INFO: Number of nodes with available pods: 2
Aug 5 12:56:08.735: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7661, will wait for the garbage collector to delete the pods
Aug 5 12:56:08.796: INFO: Deleting DaemonSet.extensions daemon-set took: 6.169457ms
Aug 5 12:56:09.097: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.219777ms
Aug 5 12:56:16.400: INFO: Number of nodes with available pods: 0
Aug 5 12:56:16.400: INFO: Number of running nodes: 0, number of available pods: 0
Aug 5 12:56:16.405: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7661/daemonsets","resourceVersion":"3087066"},"items":null}
Aug 5 12:56:16.408: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7661/pods","resourceVersion":"3087066"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 12:56:16.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7661" for this suite.
Aug 5 12:56:22.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:56:22.507: INFO: namespace daemonsets-7661 deletion completed in 6.086396708s

• [SLOW TEST:31.118 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
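The repeated "can't tolerate ... skip checking this node" entries above are expected: the test's DaemonSet carries no toleration for the control plane's node-role.kubernetes.io/master:NoSchedule taint, so the controller only targets the two worker nodes. For comparison, here is a minimal sketch of a DaemonSet that opts in to such a node, built with the k8s.io/api Go types (the container name and pause image are illustrative assumptions, not the test's own manifest):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"app": "daemon-set"}
	ds := appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Without this toleration the DaemonSet controller skips
					// nodes carrying the master NoSchedule taint, which is why
					// the log above only counts the two workers.
					Tolerations: []corev1.Toleration{{
						Key:      "node-role.kubernetes.io/master",
						Operator: corev1.TolerationOpExists,
						Effect:   corev1.TaintEffectNoSchedule,
					}},
					Containers: []corev1.Container{{
						Name:  "app",                 // assumed name
						Image: "k8s.gcr.io/pause:3.1", // placeholder image
					}},
				},
			},
		},
	}
	fmt.Printf("tolerations: %+v\n", ds.Spec.Template.Spec.Tolerations)
}
```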
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:56:22.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 5 12:56:22.563: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 12:56:30.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-522" for this suite.
Aug 5 12:56:36.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:56:36.216: INFO: namespace init-container-522 deletion completed in 6.113296904s

• [SLOW TEST:13.708 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:56:36.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 5 12:56:36.323: INFO: Waiting up to 5m0s for pod "pod-cac81ebf-bbe6-4c72-8873-8a64a17177ee" in namespace "emptydir-7861" to be "success or failure"
Aug 5 12:56:36.327: INFO: Pod "pod-cac81ebf-bbe6-4c72-8873-8a64a17177ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.468268ms
Aug 5 12:56:38.331: INFO: Pod "pod-cac81ebf-bbe6-4c72-8873-8a64a17177ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007764176s
Aug 5 12:56:40.335: INFO: Pod "pod-cac81ebf-bbe6-4c72-8873-8a64a17177ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011410148s
STEP: Saw pod success
Aug 5 12:56:40.335: INFO: Pod "pod-cac81ebf-bbe6-4c72-8873-8a64a17177ee" satisfied condition "success or failure"
Aug 5 12:56:40.337: INFO: Trying to get logs from node iruya-worker pod pod-cac81ebf-bbe6-4c72-8873-8a64a17177ee container test-container:
STEP: delete the pod
Aug 5 12:56:40.550: INFO: Waiting for pod pod-cac81ebf-bbe6-4c72-8873-8a64a17177ee to disappear
Aug 5 12:56:40.555: INFO: Pod pod-cac81ebf-bbe6-4c72-8873-8a64a17177ee no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 12:56:40.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7861" for this suite.
Aug 5 12:56:46.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:56:46.657: INFO: namespace emptydir-7861 deletion completed in 6.099516688s

• [SLOW TEST:10.441 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:56:46.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b4909384-d49f-440c-961f-f8a27bc1a4d0
STEP: Creating a pod to test consume secrets
Aug 5 12:56:46.854: INFO: Waiting up to 5m0s for pod "pod-secrets-310160b6-476c-4af3-9c6d-9dacc4e12a76" in namespace "secrets-1299" to be "success or failure"
Aug 5 12:56:46.870: INFO: Pod "pod-secrets-310160b6-476c-4af3-9c6d-9dacc4e12a76": Phase="Pending", Reason="", readiness=false. Elapsed: 16.006439ms
Aug 5 12:56:48.874: INFO: Pod "pod-secrets-310160b6-476c-4af3-9c6d-9dacc4e12a76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0197402s
Aug 5 12:56:50.878: INFO: Pod "pod-secrets-310160b6-476c-4af3-9c6d-9dacc4e12a76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023861212s
Aug 5 12:56:52.881: INFO: Pod "pod-secrets-310160b6-476c-4af3-9c6d-9dacc4e12a76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027282753s
STEP: Saw pod success
Aug 5 12:56:52.881: INFO: Pod "pod-secrets-310160b6-476c-4af3-9c6d-9dacc4e12a76" satisfied condition "success or failure"
Aug 5 12:56:52.884: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-310160b6-476c-4af3-9c6d-9dacc4e12a76 container secret-volume-test:
STEP: delete the pod
Aug 5 12:56:53.125: INFO: Waiting for pod pod-secrets-310160b6-476c-4af3-9c6d-9dacc4e12a76 to disappear
Aug 5 12:56:53.319: INFO: Pod pod-secrets-310160b6-476c-4af3-9c6d-9dacc4e12a76 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 12:56:53.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1299" for this suite.
Aug 5 12:56:59.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:56:59.463: INFO: namespace secrets-1299 deletion completed in 6.140321336s
STEP: Destroying namespace "secret-namespace-9010" for this suite.
Aug 5 12:57:05.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:57:05.559: INFO: namespace secret-namespace-9010 deletion completed in 6.096174905s

• [SLOW TEST:18.902 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:57:05.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Aug 5 12:57:05.641: INFO: Waiting up to 5m0s for pod "client-containers-d9bcfb70-8afb-46ee-9298-44e0df4a88af" in namespace "containers-8567" to be "success or failure"
Aug 5 12:57:05.655: INFO: Pod "client-containers-d9bcfb70-8afb-46ee-9298-44e0df4a88af": Phase="Pending", Reason="", readiness=false. Elapsed: 13.810443ms
Aug 5 12:57:07.659: INFO: Pod "client-containers-d9bcfb70-8afb-46ee-9298-44e0df4a88af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0177925s
Aug 5 12:57:09.663: INFO: Pod "client-containers-d9bcfb70-8afb-46ee-9298-44e0df4a88af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021553238s
STEP: Saw pod success
Aug 5 12:57:09.663: INFO: Pod "client-containers-d9bcfb70-8afb-46ee-9298-44e0df4a88af" satisfied condition "success or failure"
Aug 5 12:57:09.665: INFO: Trying to get logs from node iruya-worker pod client-containers-d9bcfb70-8afb-46ee-9298-44e0df4a88af container test-container:
STEP: delete the pod
Aug 5 12:57:09.699: INFO: Waiting for pod client-containers-d9bcfb70-8afb-46ee-9298-44e0df4a88af to disappear
Aug 5 12:57:09.715: INFO: Pod client-containers-d9bcfb70-8afb-46ee-9298-44e0df4a88af no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 12:57:09.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8567" for this suite.
Aug 5 12:57:15.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:57:15.814: INFO: namespace containers-8567 deletion completed in 6.095694089s

• [SLOW TEST:10.254 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:57:15.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 12:57:16.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3207" for this suite.
Aug 5 12:57:22.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:57:22.185: INFO: namespace kubelet-test-3207 deletion completed in 6.114080449s

• [SLOW TEST:6.371 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:57:22.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-23abfb05-d6a5-484d-aaef-dc03c62eb9ba
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 12:57:26.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3757" for this suite.
Aug 5 12:57:48.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:57:48.653: INFO: namespace configmap-3757 deletion completed in 22.170643763s

• [SLOW TEST:26.468 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
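The "text data" and "binary data" waits above correspond to the two payload fields a ConfigMap carries: Data for UTF-8 strings and BinaryData for arbitrary bytes; mounted as a volume, each key becomes a file. A minimal sketch (key names and bytes are assumptions):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cm := corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-demo"},
		// Text keys live in Data...
		Data: map[string]string{"data-1": "value-1"},
		// ...while BinaryData holds raw bytes; when the ConfigMap is mounted
		// as a volume, this key appears as a file next to the text keys.
		BinaryData: map[string][]byte{"dump": {0xde, 0xad, 0xbe, 0xef}},
	}
	fmt.Printf("%s has %d binary key(s)\n", cm.Name, len(cm.BinaryData))
}
```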
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:57:48.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 5 12:57:48.778: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9fc99998-29bc-48d1-a4cf-719bd28e60bd" in namespace "downward-api-454" to be "success or failure"
Aug 5 12:57:48.845: INFO: Pod "downwardapi-volume-9fc99998-29bc-48d1-a4cf-719bd28e60bd": Phase="Pending", Reason="", readiness=false. Elapsed: 66.739425ms
Aug 5 12:57:50.849: INFO: Pod "downwardapi-volume-9fc99998-29bc-48d1-a4cf-719bd28e60bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07061829s
Aug 5 12:57:52.853: INFO: Pod "downwardapi-volume-9fc99998-29bc-48d1-a4cf-719bd28e60bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074710185s
STEP: Saw pod success
Aug 5 12:57:52.853: INFO: Pod "downwardapi-volume-9fc99998-29bc-48d1-a4cf-719bd28e60bd" satisfied condition "success or failure"
Aug 5 12:57:52.855: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9fc99998-29bc-48d1-a4cf-719bd28e60bd container client-container:
STEP: delete the pod
Aug 5 12:57:52.895: INFO: Waiting for pod downwardapi-volume-9fc99998-29bc-48d1-a4cf-719bd28e60bd to disappear
Aug 5 12:57:52.904: INFO: Pod downwardapi-volume-9fc99998-29bc-48d1-a4cf-719bd28e60bd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 12:57:52.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-454" for this suite.
Aug 5 12:57:58.977: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:57:59.219: INFO: namespace downward-api-454 deletion completed in 6.311380886s

• [SLOW TEST:10.565 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
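The downward API volume plugin being tested exposes a container's resource limits as files; when no memory limit is declared, the kubelet substitutes the node's allocatable memory, which is the behaviour the spec name describes. A minimal sketch (file path, mount path and image are assumptions; the container name client-container is taken from the log):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// With no memory limit on the container, the file
							// holds the node's allocatable memory instead.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```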
SSSSSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:57:59.219: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-8626
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8626 to expose endpoints map[]
Aug 5 12:58:00.019: INFO: successfully validated that service endpoint-test2 in namespace services-8626 exposes endpoints map[] (285.238055ms elapsed)
STEP: Creating pod pod1 in namespace services-8626
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8626 to expose endpoints map[pod1:[80]]
Aug 5 12:58:04.139: INFO: successfully validated that service endpoint-test2 in namespace services-8626 exposes endpoints map[pod1:[80]] (4.087984321s elapsed)
STEP: Creating pod pod2 in namespace services-8626
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8626 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 5 12:58:07.305: INFO: successfully validated that service endpoint-test2 in namespace services-8626 exposes endpoints map[pod1:[80] pod2:[80]] (3.161407156s elapsed)
STEP: Deleting pod pod1 in namespace services-8626
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8626 to expose endpoints map[pod2:[80]]
Aug 5 12:58:08.368: INFO: successfully validated that service endpoint-test2 in namespace services-8626 exposes endpoints map[pod2:[80]] (1.058434357s elapsed)
STEP: Deleting pod pod2 in namespace services-8626
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8626 to expose endpoints map[]
Aug 5 12:58:09.428: INFO: successfully validated that service endpoint-test2 in namespace services-8626 exposes endpoints map[] (1.054754457s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 12:58:09.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8626" for this suite.
Aug 5 12:58:31.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:58:31.579: INFO: namespace services-8626 deletion completed in 22.079527438s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:32.360 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
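The endpoint maps above grow and shrink because the endpoints controller adds every ready pod matching the service's selector and removes it again on deletion. A minimal sketch of a service shaped like endpoint-test2, assuming the k8s.io/api types; the selector labels are an assumption, only the name and port 80 come from the log:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			// Ready pods matching this selector are listed in the service's
			// Endpoints object; deleting a pod removes its address again,
			// which is the add/remove cycle validated step by step above.
			Selector: map[string]string{"name": "endpoint-test2"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	fmt.Printf("%s selects %v\n", svc.Name, svc.Spec.Selector)
}
```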
S
------------------------------
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:58:31.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 5 12:58:31.687: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 5 12:58:33.748: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 12:58:34.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9100" for this suite.
Aug 5 12:58:42.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:58:42.971: INFO: namespace replication-controller-9100 deletion completed in 8.115495078s

• [SLOW TEST:11.392 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
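The quota "that allows only two pods" is an ordinary ResourceQuota; an RC asking for more replicas than it permits cannot create them and surfaces a ReplicaFailure condition until it is scaled back within the quota. A minimal sketch of such a quota (the name condition-test comes from the log):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			// Hard caps the namespace at two pods; pod creation beyond the
			// cap is rejected by the quota admission controller.
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("2"),
			},
		},
	}
	q := quota.Spec.Hard[corev1.ResourcePods]
	fmt.Printf("hard pod quota: %s\n", q.String())
}
```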
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:58:42.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 5 12:58:43.058: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5b96849-7aaf-4daa-8dbd-4cd03eef7b7b" in namespace "projected-9677" to be "success or failure"
Aug 5 12:58:43.123: INFO: Pod "downwardapi-volume-d5b96849-7aaf-4daa-8dbd-4cd03eef7b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 65.092353ms
Aug 5 12:58:45.127: INFO: Pod "downwardapi-volume-d5b96849-7aaf-4daa-8dbd-4cd03eef7b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06884996s
Aug 5 12:58:47.130: INFO: Pod "downwardapi-volume-d5b96849-7aaf-4daa-8dbd-4cd03eef7b7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071586958s
STEP: Saw pod success
Aug 5 12:58:47.130: INFO: Pod "downwardapi-volume-d5b96849-7aaf-4daa-8dbd-4cd03eef7b7b" satisfied condition "success or failure"
Aug 5 12:58:47.132: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d5b96849-7aaf-4daa-8dbd-4cd03eef7b7b container client-container:
STEP: delete the pod
Aug 5 12:58:47.252: INFO: Waiting for pod downwardapi-volume-d5b96849-7aaf-4daa-8dbd-4cd03eef7b7b to disappear
Aug 5 12:58:47.297: INFO: Pod downwardapi-volume-d5b96849-7aaf-4daa-8dbd-4cd03eef7b7b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 12:58:47.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9677" for this suite.
Aug 5 12:58:53.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:58:53.390: INFO: namespace projected-9677 deletion completed in 6.087424517s

• [SLOW TEST:10.418 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:58:53.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 12:59:25.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1620" for this suite.
Aug 5 12:59:31.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 12:59:31.864: INFO: namespace container-runtime-1620 deletion completed in 6.085826593s

• [SLOW TEST:38.474 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 12:59:31.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8747
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8747
STEP: Creating statefulset with conflicting port in namespace statefulset-8747
STEP: Waiting until pod test-pod will start running in namespace statefulset-8747
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8747
Aug 5 12:59:38.010: INFO: Observed stateful pod in namespace: statefulset-8747, name: ss-0, uid: 11127aa1-749a-4477-840c-712d967f4cea, status phase: Pending. Waiting for statefulset controller to delete.
Aug 5 12:59:38.153: INFO: Observed stateful pod in namespace: statefulset-8747, name: ss-0, uid: 11127aa1-749a-4477-840c-712d967f4cea, status phase: Failed. Waiting for statefulset controller to delete.
Aug 5 12:59:38.161: INFO: Observed stateful pod in namespace: statefulset-8747, name: ss-0, uid: 11127aa1-749a-4477-840c-712d967f4cea, status phase: Failed. Waiting for statefulset controller to delete.
Aug 5 12:59:38.203: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8747
STEP: Removing pod with conflicting port in namespace statefulset-8747
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8747 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 5 12:59:44.380: INFO: Deleting all statefulset in ns statefulset-8747
Aug 5 12:59:44.383: INFO: Scaling statefulset ss to 0
Aug 5 13:00:04.446: INFO: Waiting for statefulset status.replicas updated to 0
Aug 5 13:00:04.450: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 13:00:04.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8747" for this suite.
Aug 5 13:00:12.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 13:00:12.584: INFO: namespace statefulset-8747 deletion completed in 8.093185961s

• [SLOW TEST:40.720 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
  should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 13:00:12.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 5 13:00:12.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6751'
Aug 5 13:00:15.577: INFO: stderr: ""
Aug 5 13:00:15.577: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 5 13:00:16.581: INFO: Selector matched 1 pods for map[app:redis]
Aug 5 13:00:16.581: INFO: Found 0 / 1
Aug 5 13:00:17.623: INFO: Selector matched 1 pods for map[app:redis]
Aug 5 13:00:17.623: INFO: Found 0 / 1
Aug 5 13:00:18.582: INFO: Selector matched 1 pods for map[app:redis]
Aug 5 13:00:18.582: INFO: Found 0 / 1
Aug 5 13:00:19.581: INFO: Selector matched 1 pods for map[app:redis]
Aug 5 13:00:19.581: INFO: Found 1 / 1
Aug 5 13:00:19.581: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Aug 5 13:00:19.584: INFO: Selector matched 1 pods for map[app:redis]
Aug 5 13:00:19.584: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Aug 5 13:00:19.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-pghrn --namespace=kubectl-6751 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 5 13:00:19.674: INFO: stderr: ""
Aug 5 13:00:19.674: INFO: stdout: "pod/redis-master-pghrn patched\n"
STEP: checking annotations
Aug 5 13:00:19.724: INFO: Selector matched 1 pods for map[app:redis]
Aug 5 13:00:19.724: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 13:00:19.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6751" for this suite.
Aug 5 13:00:41.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 13:00:41.891: INFO: namespace kubectl-6751 deletion completed in 22.162546176s

• [SLOW TEST:29.307 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 13:00:41.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 13:00:45.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7977" for this suite.
Aug 5 13:01:31.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 13:01:32.071: INFO: namespace kubelet-test-7977 deletion completed in 46.107977305s

• [SLOW TEST:50.179 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 13:01:32.072: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-fcea96ba-d7cb-49da-a0db-3dfeb1908999
STEP: Creating a pod to test consume configMaps
Aug 5 13:01:32.141: INFO: Waiting up to 5m0s for pod "pod-configmaps-65c99dde-5687-499f-b5c4-3cc802dc06c6" in namespace "configmap-1001" to be "success or failure"
Aug 5 13:01:32.152: INFO: Pod "pod-configmaps-65c99dde-5687-499f-b5c4-3cc802dc06c6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.727062ms
Aug 5 13:01:34.157: INFO: Pod "pod-configmaps-65c99dde-5687-499f-b5c4-3cc802dc06c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015440553s
Aug 5 13:01:36.161: INFO: Pod "pod-configmaps-65c99dde-5687-499f-b5c4-3cc802dc06c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019525457s
STEP: Saw pod success
Aug 5 13:01:36.161: INFO: Pod "pod-configmaps-65c99dde-5687-499f-b5c4-3cc802dc06c6" satisfied condition "success or failure"
Aug 5 13:01:36.164: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-65c99dde-5687-499f-b5c4-3cc802dc06c6 container configmap-volume-test:
STEP: delete the pod
Aug 5 13:01:36.181: INFO: Waiting for pod pod-configmaps-65c99dde-5687-499f-b5c4-3cc802dc06c6 to disappear
Aug 5 13:01:36.187: INFO: Pod pod-configmaps-65c99dde-5687-499f-b5c4-3cc802dc06c6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 13:01:36.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1001" for this suite.
Aug 5 13:01:42.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 13:01:42.308: INFO: namespace configmap-1001 deletion completed in 6.097653419s

• [SLOW TEST:10.237 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
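"Mappings and Item mode" refers to the Items field of a ConfigMap volume, which remaps a single key to a chosen file path and sets that file's permissions. A minimal sketch (key, path, mode and mount path are assumptions modelled on the test name):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-mapping-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cm",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Items remaps key "data-1" to path/to/data-2 and Mode
						// sets that one file's permissions.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/cm/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm", MountPath: "/etc/cm"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
```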
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 13:01:42.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 5 13:01:46.958: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1305 pod-service-account-6a79f4f8-e89a-4c9b-b2e8-811b796bbb42 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 5 13:01:47.140: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1305 pod-service-account-6a79f4f8-e89a-4c9b-b2e8-811b796bbb42 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 5 13:01:47.354: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1305 pod-service-account-6a79f4f8-e89a-4c9b-b2e8-811b796bbb42 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 13:01:47.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1305" for this suite.
Aug 5 13:01:53.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 13:01:53.637: INFO: namespace svcaccounts-1305 deletion completed in 6.088298719s

• [SLOW TEST:11.329 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
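The three kubectl exec calls above read the token, CA bundle and namespace that the service account admission controller projects into every container under /var/run/secrets/kubernetes.io/serviceaccount (unless automounting is disabled). A minimal sketch of a pod that reads one of those files itself (pod name, image and command are assumptions):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "token-demo"},
		Spec: corev1.PodSpec{
			ServiceAccountName: "default",
			Containers: []corev1.Container{{
				Name:  "test",
				Image: "busybox",
				// token, ca.crt and namespace all live under this well-known
				// directory inside the container.
				Command: []string{"sh", "-c",
					"cat /var/run/secrets/kubernetes.io/serviceaccount/token"},
			}},
		},
	}
	fmt.Println(pod.Spec.ServiceAccountName)
}
```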
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 5 13:01:53.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 5 13:01:53.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6488" for this suite.
Aug 5 13:02:15.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 5 13:02:15.910: INFO: namespace pods-6488 deletion completed in 22.146672922s

• [SLOW TEST:22.272 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
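The QoS class the test verifies is derived, not declared: requests equal to limits for every container and resource yields Guaranteed, requests below limits yields Burstable, and no requests or limits at all yields BestEffort. A minimal sketch of a Guaranteed pod (name, image and quantities are assumptions):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	res := corev1.ResourceList{
		corev1.ResourceCPU:    resource.MustParse("100m"),
		corev1.ResourceMemory: resource.MustParse("100Mi"),
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "busybox",
				// Requests == Limits for every resource, so the API server
				// sets status.qosClass to Guaranteed.
				Resources: corev1.ResourceRequirements{Requests: res, Limits: res},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Resources.Limits.Cpu())
}
```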
Aug 5 13:02:15.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 5 13:02:15.910: INFO: namespace pods-6488 deletion completed in 22.146672922s • [SLOW TEST:22.272 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 5 13:02:15.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-981 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 5 13:02:15.974: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 5 13:02:40.092: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.17:8080/dial?request=hostName&protocol=http&host=10.244.2.65&port=8080&tries=1'] Namespace:pod-network-test-981 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 5 13:02:40.092: INFO: >>> kubeConfig: /root/.kube/config I0805 13:02:40.121724 6 log.go:172] (0xc001f3e790) (0xc0018b6960) Create stream I0805 13:02:40.121787 6 log.go:172] (0xc001f3e790) (0xc0018b6960) Stream added, broadcasting: 1 I0805 13:02:40.123603 6 log.go:172] (0xc001f3e790) Reply frame received for 1 I0805 13:02:40.123646 6 log.go:172] (0xc001f3e790) (0xc0018b6a00) Create stream I0805 13:02:40.123662 6 log.go:172] (0xc001f3e790) (0xc0018b6a00) Stream added, broadcasting: 3 I0805 13:02:40.124411 6 log.go:172] (0xc001f3e790) Reply frame received for 3 I0805 13:02:40.124438 6 log.go:172] (0xc001f3e790) (0xc0002f4d20) Create stream I0805 13:02:40.124450 6 log.go:172] (0xc001f3e790) (0xc0002f4d20) Stream added, broadcasting: 5 I0805 13:02:40.125518 6 log.go:172] (0xc001f3e790) Reply frame received for 5 I0805 13:02:40.203543 6 log.go:172] (0xc001f3e790) Data frame received for 3 I0805 13:02:40.203578 6 log.go:172] (0xc0018b6a00) (3) Data frame handling I0805 13:02:40.203630 6 log.go:172] (0xc0018b6a00) (3) Data frame sent I0805 13:02:40.204345 6 log.go:172] (0xc001f3e790) Data frame received for 5 I0805 13:02:40.204420 6 log.go:172] (0xc0002f4d20) (5) Data frame handling I0805 13:02:40.205025 6 log.go:172] (0xc001f3e790) Data frame received for 3 I0805 13:02:40.205051 6 log.go:172] (0xc0018b6a00) (3) Data frame handling I0805 13:02:40.206708 6 log.go:172] (0xc001f3e790) Data frame received for 1 I0805 13:02:40.206725 6 
log.go:172] (0xc0018b6960) (1) Data frame handling I0805 13:02:40.206733 6 log.go:172] (0xc0018b6960) (1) Data frame sent I0805 13:02:40.206752 6 log.go:172] (0xc001f3e790) (0xc0018b6960) Stream removed, broadcasting: 1 I0805 13:02:40.206764 6 log.go:172] (0xc001f3e790) Go away received I0805 13:02:40.206941 6 log.go:172] (0xc001f3e790) (0xc0018b6960) Stream removed, broadcasting: 1 I0805 13:02:40.206978 6 log.go:172] (0xc001f3e790) (0xc0018b6a00) Stream removed, broadcasting: 3 I0805 13:02:40.206992 6 log.go:172] (0xc001f3e790) (0xc0002f4d20) Stream removed, broadcasting: 5 Aug 5 13:02:40.207: INFO: Waiting for endpoints: map[] Aug 5 13:02:40.212: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.17:8080/dial?request=hostName&protocol=http&host=10.244.1.16&port=8080&tries=1'] Namespace:pod-network-test-981 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 5 13:02:40.212: INFO: >>> kubeConfig: /root/.kube/config I0805 13:02:40.239610 6 log.go:172] (0xc00138cdc0) (0xc002ad0320) Create stream I0805 13:02:40.239644 6 log.go:172] (0xc00138cdc0) (0xc002ad0320) Stream added, broadcasting: 1 I0805 13:02:40.241885 6 log.go:172] (0xc00138cdc0) Reply frame received for 1 I0805 13:02:40.241940 6 log.go:172] (0xc00138cdc0) (0xc0002f4dc0) Create stream I0805 13:02:40.241962 6 log.go:172] (0xc00138cdc0) (0xc0002f4dc0) Stream added, broadcasting: 3 I0805 13:02:40.242702 6 log.go:172] (0xc00138cdc0) Reply frame received for 3 I0805 13:02:40.242733 6 log.go:172] (0xc00138cdc0) (0xc0018b6aa0) Create stream I0805 13:02:40.242742 6 log.go:172] (0xc00138cdc0) (0xc0018b6aa0) Stream added, broadcasting: 5 I0805 13:02:40.243640 6 log.go:172] (0xc00138cdc0) Reply frame received for 5 I0805 13:02:40.316666 6 log.go:172] (0xc00138cdc0) Data frame received for 3 I0805 13:02:40.316693 6 log.go:172] (0xc0002f4dc0) (3) Data frame handling I0805 13:02:40.316711 6 log.go:172] (0xc0002f4dc0) (3) Data frame sent I0805 13:02:40.317230 6 log.go:172] (0xc00138cdc0) Data frame received for 5 I0805 13:02:40.317251 6 log.go:172] (0xc0018b6aa0) (5) Data frame handling I0805 13:02:40.317367 6 log.go:172] (0xc00138cdc0) Data frame received for 3 I0805 13:02:40.317410 6 log.go:172] (0xc0002f4dc0) (3) Data frame handling I0805 13:02:40.318671 6 log.go:172] (0xc00138cdc0) Data frame received for 1 I0805 13:02:40.318697 6 log.go:172] (0xc002ad0320) (1) Data frame handling I0805 13:02:40.318723 6 log.go:172] (0xc002ad0320) (1) Data frame sent I0805 13:02:40.318739 6 log.go:172] (0xc00138cdc0) (0xc002ad0320) Stream removed, broadcasting: 1 I0805 13:02:40.318755 6 log.go:172] (0xc00138cdc0) Go away received I0805 13:02:40.318934 6 log.go:172] (0xc00138cdc0) (0xc002ad0320) Stream removed, broadcasting: 1 I0805 13:02:40.318951 6 log.go:172] (0xc00138cdc0) (0xc0002f4dc0) Stream removed, broadcasting: 3 I0805 13:02:40.318971 6 log.go:172] (0xc00138cdc0) (0xc0018b6aa0) Stream removed, broadcasting: 5 Aug 5 13:02:40.319: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 5 13:02:40.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-981" for this suite. 
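The two probes above are the whole of the intra-pod HTTP check: the framework execs into the hostexec container and curls the netserver's /dial endpoint, which in turn dials the other pod and reports which hostnames answered. The "Waiting for endpoints: map[]" lines are the success path, since the map holds only the endpoints still missing. Reduced to a hand-run sketch (the pod IPs 10.244.1.17, 10.244.2.65 and 10.244.1.16 are ephemeral addresses from this run, and host-test-container-pod exists only while the test namespace does):

  # Same request the framework issues through ExecWithOptions:
  kubectl --kubeconfig=/root/.kube/config exec -n pod-network-test-981 \
    host-test-container-pod -c hostexec -- /bin/sh -c \
    "curl -g -q -s 'http://10.244.1.17:8080/dial?request=hostName&protocol=http&host=10.244.2.65&port=8080&tries=1'"
  # A reachable peer answers with a small JSON body naming itself, e.g.
  # {"responses":["netserver-1"]} (the exact shape depends on the netserver image).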
Aug 5 13:03:04.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 5 13:03:04.407: INFO: namespace pod-network-test-981 deletion completed in 24.084804738s • [SLOW TEST:48.497 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 5 13:03:04.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 5 13:03:08.755: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 5 13:03:08.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7171" for this suite. 
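This case passes when the kubelet copies the container's termination-log file into the pod status; with TerminationMessagePolicy FallbackToLogsOnError the log tail is used only if the container fails with an empty message file. A minimal self-contained reproduction (object names here are illustrative, not the test's generated ones):

  # termination-demo.yaml: the container writes "OK" to the default termination-log path.
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-demo
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["/bin/sh", "-c", "echo -n OK > /dev/termination-log"]
      terminationMessagePolicy: FallbackToLogsOnError

  # Apply it, and once the pod has succeeded read the message back from the
  # container status, which is what the "Expected: &{OK} to match Container's
  # Termination Message" assertion checks:
  kubectl apply -f termination-demo.yaml
  kubectl get pod termination-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'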
Aug 5 13:03:14.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 5 13:03:14.957: INFO: namespace container-runtime-7171 deletion completed in 6.130089109s • [SLOW TEST:10.550 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 5 13:03:14.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Aug 5 13:03:15.049: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1140,SelfLink:/api/v1/namespaces/watch-1140/configmaps/e2e-watch-test-resource-version,UID:2c5578ef-bfdd-418c-9d33-0a4f1ce4264d,ResourceVersion:3089367,Generation:0,CreationTimestamp:2020-08-05 13:03:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 5 13:03:15.049: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-1140,SelfLink:/api/v1/namespaces/watch-1140/configmaps/e2e-watch-test-resource-version,UID:2c5578ef-bfdd-418c-9d33-0a4f1ce4264d,ResourceVersion:3089368,Generation:0,CreationTimestamp:2020-08-05 13:03:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 5 13:03:15.049: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1140" for this suite. Aug 5 13:03:21.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 5 13:03:21.176: INFO: namespace watch-1140 deletion completed in 6.123771791s • [SLOW TEST:6.218 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 5 13:03:21.176: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6741.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6741.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 5 13:03:27.289: INFO: DNS probes using dns-test-f401807f-4788-47ed-8d7a-fef822e04ca3 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6741.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6741.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 5 13:03:35.398: INFO: File wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local from pod dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 5 13:03:35.401: INFO: File jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local from pod dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 5 13:03:35.401: INFO: Lookups using dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db failed for: [wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local] Aug 5 13:03:40.405: INFO: File wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local from pod dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 5 13:03:40.408: INFO: File jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local from pod dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 5 13:03:40.408: INFO: Lookups using dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db failed for: [wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local] Aug 5 13:03:45.406: INFO: File wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local from pod dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 5 13:03:45.409: INFO: File jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local from pod dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 5 13:03:45.409: INFO: Lookups using dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db failed for: [wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local] Aug 5 13:03:50.407: INFO: File wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local from pod dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 5 13:03:50.411: INFO: File jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local from pod dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 5 13:03:50.411: INFO: Lookups using dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db failed for: [wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local] Aug 5 13:03:55.406: INFO: File wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local from pod dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 5 13:03:55.409: INFO: File jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local from pod dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db contains 'foo.example.com. ' instead of 'bar.example.com.' 
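These retries are expected rather than a failure: the ExternalName service has just been re-pointed from foo.example.com to bar.example.com, and the probe pods keep re-running dig until the updated CNAME propagates through the cluster DNS, which happens below at 13:04:00. The probe is the exact loop quoted in the STEP lines; run by hand from any pod with dig installed it reduces to the sketch below (the name resolves only inside this cluster while namespace dns-6741 exists):

  for i in `seq 1 30`; do
    dig +short dns-test-service-3.dns-6741.svc.cluster.local CNAME
    sleep 1
  done
  # Output flips from "foo.example.com." to "bar.example.com." once the
  # Service update reaches the cluster DNS.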
Aug 5 13:03:55.409: INFO: Lookups using dns-6741/dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db failed for: [wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local] Aug 5 13:04:00.410: INFO: DNS probes using dns-test-d36684cb-b402-4a5d-a2d4-31a317a5c7db succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6741.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-6741.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-6741.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-6741.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 5 13:04:09.465: INFO: DNS probes using dns-test-c46434fe-27ce-464a-a5d4-48ee6c968b1f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 5 13:04:09.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6741" for this suite. Aug 5 13:04:19.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 5 13:04:19.615: INFO: namespace dns-6741 deletion completed in 10.085938658s • [SLOW TEST:58.438 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 5 13:04:19.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-22582535-65c2-4f38-baea-6ac2882157b7 STEP: Creating a pod to test consume configMaps Aug 5 13:04:19.761: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ae501f1e-14e0-44d1-9645-40ef0f9f4373" in namespace "projected-3184" to be "success or failure" Aug 5 13:04:19.783: INFO: Pod "pod-projected-configmaps-ae501f1e-14e0-44d1-9645-40ef0f9f4373": Phase="Pending", Reason="", readiness=false. Elapsed: 21.637226ms Aug 5 13:04:21.787: INFO: Pod "pod-projected-configmaps-ae501f1e-14e0-44d1-9645-40ef0f9f4373": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.025130189s Aug 5 13:04:23.791: INFO: Pod "pod-projected-configmaps-ae501f1e-14e0-44d1-9645-40ef0f9f4373": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029489393s Aug 5 13:04:25.795: INFO: Pod "pod-projected-configmaps-ae501f1e-14e0-44d1-9645-40ef0f9f4373": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033438573s STEP: Saw pod success Aug 5 13:04:25.795: INFO: Pod "pod-projected-configmaps-ae501f1e-14e0-44d1-9645-40ef0f9f4373" satisfied condition "success or failure" Aug 5 13:04:25.798: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-ae501f1e-14e0-44d1-9645-40ef0f9f4373 container projected-configmap-volume-test: STEP: delete the pod Aug 5 13:04:25.952: INFO: Waiting for pod pod-projected-configmaps-ae501f1e-14e0-44d1-9645-40ef0f9f4373 to disappear Aug 5 13:04:26.156: INFO: Pod pod-projected-configmaps-ae501f1e-14e0-44d1-9645-40ef0f9f4373 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 5 13:04:26.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3184" for this suite. Aug 5 13:04:32.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 5 13:04:32.304: INFO: namespace projected-3184 deletion completed in 6.143993835s • [SLOW TEST:12.688 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 5 13:04:32.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6882 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6882 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6882 Aug 5 13:04:32.406: INFO: Found 0 stateful pods, waiting for 1 Aug 5 13:04:42.423: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 5 13:04:42.430: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6882 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 5 13:04:42.680: INFO: stderr: "I0805 13:04:42.563058 144 log.go:172] (0xc000116dc0) (0xc000802640) Create stream\nI0805 13:04:42.563135 144 log.go:172] (0xc000116dc0) (0xc000802640) Stream added, broadcasting: 1\nI0805 13:04:42.566930 144 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0805 13:04:42.566981 144 log.go:172] (0xc000116dc0) (0xc0001e8460) Create stream\nI0805 13:04:42.567005 144 log.go:172] (0xc000116dc0) (0xc0001e8460) Stream added, broadcasting: 3\nI0805 13:04:42.568238 144 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0805 13:04:42.568284 144 log.go:172] (0xc000116dc0) (0xc000886000) Create stream\nI0805 13:04:42.568299 144 log.go:172] (0xc000116dc0) (0xc000886000) Stream added, broadcasting: 5\nI0805 13:04:42.569339 144 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0805 13:04:42.643357 144 log.go:172] (0xc000116dc0) Data frame received for 5\nI0805 13:04:42.643386 144 log.go:172] (0xc000886000) (5) Data frame handling\nI0805 13:04:42.643398 144 log.go:172] (0xc000886000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0805 13:04:42.671406 144 log.go:172] (0xc000116dc0) Data frame received for 5\nI0805 13:04:42.671440 144 log.go:172] (0xc000886000) (5) Data frame handling\nI0805 13:04:42.671470 144 log.go:172] (0xc000116dc0) Data frame received for 3\nI0805 13:04:42.671485 144 log.go:172] (0xc0001e8460) (3) Data frame handling\nI0805 13:04:42.671497 144 log.go:172] (0xc0001e8460) (3) Data frame sent\nI0805 13:04:42.671515 144 log.go:172] (0xc000116dc0) Data frame received for 3\nI0805 13:04:42.671531 144 log.go:172] (0xc0001e8460) (3) Data frame handling\nI0805 13:04:42.673799 144 log.go:172] (0xc000116dc0) Data frame received for 1\nI0805 13:04:42.673821 144 log.go:172] (0xc000802640) (1) Data frame handling\nI0805 13:04:42.673837 144 log.go:172] (0xc000802640) (1) Data frame sent\nI0805 13:04:42.673848 144 log.go:172] (0xc000116dc0) (0xc000802640) Stream removed, broadcasting: 1\nI0805 13:04:42.673866 144 log.go:172] (0xc000116dc0) Go away received\nI0805 13:04:42.674351 144 log.go:172] (0xc000116dc0) (0xc000802640) Stream removed, broadcasting: 1\nI0805 13:04:42.674381 144 log.go:172] (0xc000116dc0) (0xc0001e8460) Stream removed, broadcasting: 3\nI0805 13:04:42.674392 144 log.go:172] (0xc000116dc0) (0xc000886000) Stream removed, broadcasting: 5\n" Aug 5 13:04:42.680: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 5 13:04:42.680: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 5 13:04:42.684: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 5 13:04:52.688: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 5 13:04:52.689: INFO: Waiting for statefulset status.replicas updated to 0 Aug 5 13:04:52.705: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999639s Aug 5 13:04:53.709: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991962753s Aug 5 13:04:54.714: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987729856s Aug 5 13:04:55.719: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.982506727s Aug 5 13:04:56.724: INFO: Verifying statefulset ss doesn't scale past 1 for another 
5.977420922s Aug 5 13:04:57.730: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.97297001s Aug 5 13:04:58.742: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.96673348s Aug 5 13:04:59.745: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.955424614s Aug 5 13:05:00.757: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.951923797s Aug 5 13:05:01.761: INFO: Verifying statefulset ss doesn't scale past 1 for another 939.467601ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6882 Aug 5 13:05:02.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6882 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 5 13:05:02.980: INFO: stderr: "I0805 13:05:02.889909 160 log.go:172] (0xc000116dc0) (0xc0005be820) Create stream\nI0805 13:05:02.889973 160 log.go:172] (0xc000116dc0) (0xc0005be820) Stream added, broadcasting: 1\nI0805 13:05:02.893525 160 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0805 13:05:02.893562 160 log.go:172] (0xc000116dc0) (0xc00068a1e0) Create stream\nI0805 13:05:02.893571 160 log.go:172] (0xc000116dc0) (0xc00068a1e0) Stream added, broadcasting: 3\nI0805 13:05:02.897082 160 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0805 13:05:02.897114 160 log.go:172] (0xc000116dc0) (0xc0005be000) Create stream\nI0805 13:05:02.897125 160 log.go:172] (0xc000116dc0) (0xc0005be000) Stream added, broadcasting: 5\nI0805 13:05:02.898089 160 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0805 13:05:02.971933 160 log.go:172] (0xc000116dc0) Data frame received for 3\nI0805 13:05:02.971987 160 log.go:172] (0xc00068a1e0) (3) Data frame handling\nI0805 13:05:02.972009 160 log.go:172] (0xc00068a1e0) (3) Data frame sent\nI0805 13:05:02.972321 160 log.go:172] (0xc000116dc0) Data frame received for 5\nI0805 13:05:02.972346 160 log.go:172] (0xc0005be000) (5) Data frame handling\nI0805 13:05:02.972365 160 log.go:172] (0xc0005be000) (5) Data frame sent\nI0805 13:05:02.972384 160 log.go:172] (0xc000116dc0) Data frame received for 5\nI0805 13:05:02.972393 160 log.go:172] (0xc0005be000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0805 13:05:02.972423 160 log.go:172] (0xc000116dc0) Data frame received for 3\nI0805 13:05:02.972434 160 log.go:172] (0xc00068a1e0) (3) Data frame handling\nI0805 13:05:02.974074 160 log.go:172] (0xc000116dc0) Data frame received for 1\nI0805 13:05:02.974113 160 log.go:172] (0xc0005be820) (1) Data frame handling\nI0805 13:05:02.974148 160 log.go:172] (0xc0005be820) (1) Data frame sent\nI0805 13:05:02.974176 160 log.go:172] (0xc000116dc0) (0xc0005be820) Stream removed, broadcasting: 1\nI0805 13:05:02.974390 160 log.go:172] (0xc000116dc0) Go away received\nI0805 13:05:02.974591 160 log.go:172] (0xc000116dc0) (0xc0005be820) Stream removed, broadcasting: 1\nI0805 13:05:02.974614 160 log.go:172] (0xc000116dc0) (0xc00068a1e0) Stream removed, broadcasting: 3\nI0805 13:05:02.974633 160 log.go:172] (0xc000116dc0) (0xc0005be000) Stream removed, broadcasting: 5\n" Aug 5 13:05:02.980: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 5 13:05:02.980: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 5 13:05:02.984: INFO: Found 1 stateful pods, waiting for 3 Aug 5 13:05:12.989: INFO: Waiting for pod ss-0 to enter Running 
- Ready=true, currently Running - Ready=true Aug 5 13:05:12.989: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 5 13:05:12.989: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 5 13:05:12.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6882 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 5 13:05:13.204: INFO: stderr: "I0805 13:05:13.124540 180 log.go:172] (0xc000131080) (0xc0006b6960) Create stream\nI0805 13:05:13.124613 180 log.go:172] (0xc000131080) (0xc0006b6960) Stream added, broadcasting: 1\nI0805 13:05:13.127910 180 log.go:172] (0xc000131080) Reply frame received for 1\nI0805 13:05:13.127957 180 log.go:172] (0xc000131080) (0xc00079e000) Create stream\nI0805 13:05:13.127972 180 log.go:172] (0xc000131080) (0xc00079e000) Stream added, broadcasting: 3\nI0805 13:05:13.129126 180 log.go:172] (0xc000131080) Reply frame received for 3\nI0805 13:05:13.129167 180 log.go:172] (0xc000131080) (0xc0006b61e0) Create stream\nI0805 13:05:13.129178 180 log.go:172] (0xc000131080) (0xc0006b61e0) Stream added, broadcasting: 5\nI0805 13:05:13.130150 180 log.go:172] (0xc000131080) Reply frame received for 5\nI0805 13:05:13.197026 180 log.go:172] (0xc000131080) Data frame received for 5\nI0805 13:05:13.197081 180 log.go:172] (0xc0006b61e0) (5) Data frame handling\nI0805 13:05:13.197101 180 log.go:172] (0xc0006b61e0) (5) Data frame sent\nI0805 13:05:13.197111 180 log.go:172] (0xc000131080) Data frame received for 5\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0805 13:05:13.197121 180 log.go:172] (0xc0006b61e0) (5) Data frame handling\nI0805 13:05:13.197173 180 log.go:172] (0xc000131080) Data frame received for 3\nI0805 13:05:13.197196 180 log.go:172] (0xc00079e000) (3) Data frame handling\nI0805 13:05:13.197214 180 log.go:172] (0xc00079e000) (3) Data frame sent\nI0805 13:05:13.197224 180 log.go:172] (0xc000131080) Data frame received for 3\nI0805 13:05:13.197241 180 log.go:172] (0xc00079e000) (3) Data frame handling\nI0805 13:05:13.198434 180 log.go:172] (0xc000131080) Data frame received for 1\nI0805 13:05:13.198455 180 log.go:172] (0xc0006b6960) (1) Data frame handling\nI0805 13:05:13.198472 180 log.go:172] (0xc0006b6960) (1) Data frame sent\nI0805 13:05:13.198487 180 log.go:172] (0xc000131080) (0xc0006b6960) Stream removed, broadcasting: 1\nI0805 13:05:13.198503 180 log.go:172] (0xc000131080) Go away received\nI0805 13:05:13.198888 180 log.go:172] (0xc000131080) (0xc0006b6960) Stream removed, broadcasting: 1\nI0805 13:05:13.198909 180 log.go:172] (0xc000131080) (0xc00079e000) Stream removed, broadcasting: 3\nI0805 13:05:13.198920 180 log.go:172] (0xc000131080) (0xc0006b61e0) Stream removed, broadcasting: 5\n" Aug 5 13:05:13.204: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 5 13:05:13.204: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 5 13:05:13.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6882 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 5 13:05:13.446: INFO: stderr: "I0805 13:05:13.335627 201 log.go:172] (0xc000116fd0) (0xc0003d2be0) Create stream\nI0805 13:05:13.335681 201 log.go:172] 
(0xc000116fd0) (0xc0003d2be0) Stream added, broadcasting: 1\nI0805 13:05:13.339055 201 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0805 13:05:13.339125 201 log.go:172] (0xc000116fd0) (0xc000a24000) Create stream\nI0805 13:05:13.339155 201 log.go:172] (0xc000116fd0) (0xc000a24000) Stream added, broadcasting: 3\nI0805 13:05:13.340275 201 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0805 13:05:13.340337 201 log.go:172] (0xc000116fd0) (0xc000794000) Create stream\nI0805 13:05:13.340357 201 log.go:172] (0xc000116fd0) (0xc000794000) Stream added, broadcasting: 5\nI0805 13:05:13.341682 201 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0805 13:05:13.408676 201 log.go:172] (0xc000116fd0) Data frame received for 5\nI0805 13:05:13.408702 201 log.go:172] (0xc000794000) (5) Data frame handling\nI0805 13:05:13.408717 201 log.go:172] (0xc000794000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0805 13:05:13.438707 201 log.go:172] (0xc000116fd0) Data frame received for 5\nI0805 13:05:13.438745 201 log.go:172] (0xc000794000) (5) Data frame handling\nI0805 13:05:13.438776 201 log.go:172] (0xc000116fd0) Data frame received for 3\nI0805 13:05:13.438789 201 log.go:172] (0xc000a24000) (3) Data frame handling\nI0805 13:05:13.438824 201 log.go:172] (0xc000a24000) (3) Data frame sent\nI0805 13:05:13.438845 201 log.go:172] (0xc000116fd0) Data frame received for 3\nI0805 13:05:13.438865 201 log.go:172] (0xc000a24000) (3) Data frame handling\nI0805 13:05:13.441015 201 log.go:172] (0xc000116fd0) Data frame received for 1\nI0805 13:05:13.441052 201 log.go:172] (0xc0003d2be0) (1) Data frame handling\nI0805 13:05:13.441071 201 log.go:172] (0xc0003d2be0) (1) Data frame sent\nI0805 13:05:13.441165 201 log.go:172] (0xc000116fd0) (0xc0003d2be0) Stream removed, broadcasting: 1\nI0805 13:05:13.441207 201 log.go:172] (0xc000116fd0) Go away received\nI0805 13:05:13.441757 201 log.go:172] (0xc000116fd0) (0xc0003d2be0) Stream removed, broadcasting: 1\nI0805 13:05:13.441784 201 log.go:172] (0xc000116fd0) (0xc000a24000) Stream removed, broadcasting: 3\nI0805 13:05:13.441820 201 log.go:172] (0xc000116fd0) (0xc000794000) Stream removed, broadcasting: 5\n" Aug 5 13:05:13.446: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 5 13:05:13.446: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 5 13:05:13.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6882 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 5 13:05:13.750: INFO: stderr: "I0805 13:05:13.585915 223 log.go:172] (0xc0008f6420) (0xc00036e820) Create stream\nI0805 13:05:13.585976 223 log.go:172] (0xc0008f6420) (0xc00036e820) Stream added, broadcasting: 1\nI0805 13:05:13.588992 223 log.go:172] (0xc0008f6420) Reply frame received for 1\nI0805 13:05:13.589036 223 log.go:172] (0xc0008f6420) (0xc00096a000) Create stream\nI0805 13:05:13.589069 223 log.go:172] (0xc0008f6420) (0xc00096a000) Stream added, broadcasting: 3\nI0805 13:05:13.590747 223 log.go:172] (0xc0008f6420) Reply frame received for 3\nI0805 13:05:13.590771 223 log.go:172] (0xc0008f6420) (0xc00036e000) Create stream\nI0805 13:05:13.590781 223 log.go:172] (0xc0008f6420) (0xc00036e000) Stream added, broadcasting: 5\nI0805 13:05:13.591529 223 log.go:172] (0xc0008f6420) Reply frame received for 5\nI0805 13:05:13.657671 223 log.go:172] (0xc0008f6420) Data frame received for 
5\nI0805 13:05:13.657696 223 log.go:172] (0xc00036e000) (5) Data frame handling\nI0805 13:05:13.657710 223 log.go:172] (0xc00036e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0805 13:05:13.741062 223 log.go:172] (0xc0008f6420) Data frame received for 3\nI0805 13:05:13.741100 223 log.go:172] (0xc00096a000) (3) Data frame handling\nI0805 13:05:13.741123 223 log.go:172] (0xc00096a000) (3) Data frame sent\nI0805 13:05:13.742207 223 log.go:172] (0xc0008f6420) Data frame received for 3\nI0805 13:05:13.742236 223 log.go:172] (0xc00096a000) (3) Data frame handling\nI0805 13:05:13.742261 223 log.go:172] (0xc0008f6420) Data frame received for 5\nI0805 13:05:13.742288 223 log.go:172] (0xc00036e000) (5) Data frame handling\nI0805 13:05:13.744002 223 log.go:172] (0xc0008f6420) Data frame received for 1\nI0805 13:05:13.744019 223 log.go:172] (0xc00036e820) (1) Data frame handling\nI0805 13:05:13.744031 223 log.go:172] (0xc00036e820) (1) Data frame sent\nI0805 13:05:13.744225 223 log.go:172] (0xc0008f6420) (0xc00036e820) Stream removed, broadcasting: 1\nI0805 13:05:13.744517 223 log.go:172] (0xc0008f6420) Go away received\nI0805 13:05:13.744566 223 log.go:172] (0xc0008f6420) (0xc00036e820) Stream removed, broadcasting: 1\nI0805 13:05:13.744612 223 log.go:172] (0xc0008f6420) (0xc00096a000) Stream removed, broadcasting: 3\nI0805 13:05:13.744623 223 log.go:172] (0xc0008f6420) (0xc00036e000) Stream removed, broadcasting: 5\n" Aug 5 13:05:13.750: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 5 13:05:13.750: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 5 13:05:13.750: INFO: Waiting for statefulset status.replicas updated to 0 Aug 5 13:05:13.768: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Aug 5 13:05:23.799: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 5 13:05:23.799: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 5 13:05:23.799: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 5 13:05:23.862: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999721s Aug 5 13:05:24.867: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.943842302s Aug 5 13:05:25.873: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.938325989s Aug 5 13:05:26.878: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.932467572s Aug 5 13:05:27.884: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.927313676s Aug 5 13:05:28.891: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.92197442s Aug 5 13:05:29.926: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.91428289s Aug 5 13:05:30.931: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.879327246s Aug 5 13:05:31.937: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.874181369s Aug 5 13:05:32.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 868.83514ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6882 Aug 5 13:05:33.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6882 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 5 13:05:34.149: INFO: stderr: "I0805
13:05:34.074180 243 log.go:172] (0xc000418420) (0xc0006ac6e0) Create stream\nI0805 13:05:34.074242 243 log.go:172] (0xc000418420) (0xc0006ac6e0) Stream added, broadcasting: 1\nI0805 13:05:34.076683 243 log.go:172] (0xc000418420) Reply frame received for 1\nI0805 13:05:34.078208 243 log.go:172] (0xc000418420) (0xc000a66000) Create stream\nI0805 13:05:34.078512 243 log.go:172] (0xc000418420) (0xc000a66000) Stream added, broadcasting: 3\nI0805 13:05:34.079598 243 log.go:172] (0xc000418420) Reply frame received for 3\nI0805 13:05:34.079638 243 log.go:172] (0xc000418420) (0xc000a660a0) Create stream\nI0805 13:05:34.079653 243 log.go:172] (0xc000418420) (0xc000a660a0) Stream added, broadcasting: 5\nI0805 13:05:34.080461 243 log.go:172] (0xc000418420) Reply frame received for 5\nI0805 13:05:34.141592 243 log.go:172] (0xc000418420) Data frame received for 5\nI0805 13:05:34.141638 243 log.go:172] (0xc000a660a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0805 13:05:34.141668 243 log.go:172] (0xc000418420) Data frame received for 3\nI0805 13:05:34.141718 243 log.go:172] (0xc000a66000) (3) Data frame handling\nI0805 13:05:34.141753 243 log.go:172] (0xc000a66000) (3) Data frame sent\nI0805 13:05:34.141771 243 log.go:172] (0xc000418420) Data frame received for 3\nI0805 13:05:34.141799 243 log.go:172] (0xc000a66000) (3) Data frame handling\nI0805 13:05:34.141827 243 log.go:172] (0xc000a660a0) (5) Data frame sent\nI0805 13:05:34.141855 243 log.go:172] (0xc000418420) Data frame received for 5\nI0805 13:05:34.141865 243 log.go:172] (0xc000a660a0) (5) Data frame handling\nI0805 13:05:34.143304 243 log.go:172] (0xc000418420) Data frame received for 1\nI0805 13:05:34.143334 243 log.go:172] (0xc0006ac6e0) (1) Data frame handling\nI0805 13:05:34.143371 243 log.go:172] (0xc0006ac6e0) (1) Data frame sent\nI0805 13:05:34.143399 243 log.go:172] (0xc000418420) (0xc0006ac6e0) Stream removed, broadcasting: 1\nI0805 13:05:34.143426 243 log.go:172] (0xc000418420) Go away received\nI0805 13:05:34.143866 243 log.go:172] (0xc000418420) (0xc0006ac6e0) Stream removed, broadcasting: 1\nI0805 13:05:34.143889 243 log.go:172] (0xc000418420) (0xc000a66000) Stream removed, broadcasting: 3\nI0805 13:05:34.143901 243 log.go:172] (0xc000418420) (0xc000a660a0) Stream removed, broadcasting: 5\n" Aug 5 13:05:34.149: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 5 13:05:34.149: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 5 13:05:34.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6882 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 5 13:05:34.351: INFO: stderr: "I0805 13:05:34.278468 264 log.go:172] (0xc0008d6420) (0xc0002a4820) Create stream\nI0805 13:05:34.278533 264 log.go:172] (0xc0008d6420) (0xc0002a4820) Stream added, broadcasting: 1\nI0805 13:05:34.280567 264 log.go:172] (0xc0008d6420) Reply frame received for 1\nI0805 13:05:34.280634 264 log.go:172] (0xc0008d6420) (0xc00092e000) Create stream\nI0805 13:05:34.280659 264 log.go:172] (0xc0008d6420) (0xc00092e000) Stream added, broadcasting: 3\nI0805 13:05:34.281715 264 log.go:172] (0xc0008d6420) Reply frame received for 3\nI0805 13:05:34.281759 264 log.go:172] (0xc0008d6420) (0xc0002a48c0) Create stream\nI0805 13:05:34.281772 264 log.go:172] (0xc0008d6420) (0xc0002a48c0) Stream added, broadcasting: 5\nI0805 13:05:34.282553 264 log.go:172] 
(0xc0008d6420) Reply frame received for 5\nI0805 13:05:34.344414 264 log.go:172] (0xc0008d6420) Data frame received for 5\nI0805 13:05:34.344449 264 log.go:172] (0xc0002a48c0) (5) Data frame handling\nI0805 13:05:34.344459 264 log.go:172] (0xc0002a48c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0805 13:05:34.344485 264 log.go:172] (0xc0008d6420) Data frame received for 3\nI0805 13:05:34.344531 264 log.go:172] (0xc00092e000) (3) Data frame handling\nI0805 13:05:34.344545 264 log.go:172] (0xc00092e000) (3) Data frame sent\nI0805 13:05:34.344561 264 log.go:172] (0xc0008d6420) Data frame received for 3\nI0805 13:05:34.344569 264 log.go:172] (0xc00092e000) (3) Data frame handling\nI0805 13:05:34.344605 264 log.go:172] (0xc0008d6420) Data frame received for 5\nI0805 13:05:34.344633 264 log.go:172] (0xc0002a48c0) (5) Data frame handling\nI0805 13:05:34.345725 264 log.go:172] (0xc0008d6420) Data frame received for 1\nI0805 13:05:34.345747 264 log.go:172] (0xc0002a4820) (1) Data frame handling\nI0805 13:05:34.345761 264 log.go:172] (0xc0002a4820) (1) Data frame sent\nI0805 13:05:34.345915 264 log.go:172] (0xc0008d6420) (0xc0002a4820) Stream removed, broadcasting: 1\nI0805 13:05:34.345938 264 log.go:172] (0xc0008d6420) Go away received\nI0805 13:05:34.346278 264 log.go:172] (0xc0008d6420) (0xc0002a4820) Stream removed, broadcasting: 1\nI0805 13:05:34.346294 264 log.go:172] (0xc0008d6420) (0xc00092e000) Stream removed, broadcasting: 3\nI0805 13:05:34.346301 264 log.go:172] (0xc0008d6420) (0xc0002a48c0) Stream removed, broadcasting: 5\n" Aug 5 13:05:34.351: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 5 13:05:34.351: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 5 13:05:34.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6882 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 5 13:05:34.546: INFO: stderr: "I0805 13:05:34.479726 284 log.go:172] (0xc000116f20) (0xc00027a820) Create stream\nI0805 13:05:34.479804 284 log.go:172] (0xc000116f20) (0xc00027a820) Stream added, broadcasting: 1\nI0805 13:05:34.482652 284 log.go:172] (0xc000116f20) Reply frame received for 1\nI0805 13:05:34.482686 284 log.go:172] (0xc000116f20) (0xc00027a8c0) Create stream\nI0805 13:05:34.482698 284 log.go:172] (0xc000116f20) (0xc00027a8c0) Stream added, broadcasting: 3\nI0805 13:05:34.483678 284 log.go:172] (0xc000116f20) Reply frame received for 3\nI0805 13:05:34.483712 284 log.go:172] (0xc000116f20) (0xc00027a960) Create stream\nI0805 13:05:34.483725 284 log.go:172] (0xc000116f20) (0xc00027a960) Stream added, broadcasting: 5\nI0805 13:05:34.484703 284 log.go:172] (0xc000116f20) Reply frame received for 5\nI0805 13:05:34.538604 284 log.go:172] (0xc000116f20) Data frame received for 3\nI0805 13:05:34.538649 284 log.go:172] (0xc00027a8c0) (3) Data frame handling\nI0805 13:05:34.538673 284 log.go:172] (0xc000116f20) Data frame received for 5\nI0805 13:05:34.538694 284 log.go:172] (0xc00027a960) (5) Data frame handling\nI0805 13:05:34.538708 284 log.go:172] (0xc00027a960) (5) Data frame sent\nI0805 13:05:34.538720 284 log.go:172] (0xc000116f20) Data frame received for 5\nI0805 13:05:34.538732 284 log.go:172] (0xc00027a960) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0805 13:05:34.538744 284 log.go:172] (0xc00027a8c0) (3) Data frame sent\nI0805 13:05:34.538754 284 
log.go:172] (0xc000116f20) Data frame received for 3\nI0805 13:05:34.538762 284 log.go:172] (0xc00027a8c0) (3) Data frame handling\nI0805 13:05:34.540202 284 log.go:172] (0xc000116f20) Data frame received for 1\nI0805 13:05:34.540230 284 log.go:172] (0xc00027a820) (1) Data frame handling\nI0805 13:05:34.540248 284 log.go:172] (0xc00027a820) (1) Data frame sent\nI0805 13:05:34.540442 284 log.go:172] (0xc000116f20) (0xc00027a820) Stream removed, broadcasting: 1\nI0805 13:05:34.540476 284 log.go:172] (0xc000116f20) Go away received\nI0805 13:05:34.541092 284 log.go:172] (0xc000116f20) (0xc00027a820) Stream removed, broadcasting: 1\nI0805 13:05:34.541118 284 log.go:172] (0xc000116f20) (0xc00027a8c0) Stream removed, broadcasting: 3\nI0805 13:05:34.541131 284 log.go:172] (0xc000116f20) (0xc00027a960) Stream removed, broadcasting: 5\n" Aug 5 13:05:34.546: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 5 13:05:34.546: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 5 13:05:34.546: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Aug 5 13:06:04.565: INFO: Deleting all statefulset in ns statefulset-6882 Aug 5 13:06:04.568: INFO: Scaling statefulset ss to 0 Aug 5 13:06:04.577: INFO: Waiting for statefulset status.replicas updated to 0 Aug 5 13:06:04.580: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 5 13:06:04.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6882" for this suite. 
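The mv commands in the transcript are how the test toggles readiness: the nginx readiness probe serves index.html, so moving the file out of the webroot flips the pod to Ready=false, and the StatefulSet controller then refuses to move to the next ordinal in either direction until readiness is restored. The same experiment by hand, sketched with this run's objects (the namespace and pods exist only for the test's duration):

  # Break readiness on ss-0; the readiness probe now fails:
  kubectl -n statefulset-6882 exec ss-0 -- /bin/sh -c \
    'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
  # With ss-0 unready, a scale-up stalls before creating ss-1:
  kubectl -n statefulset-6882 scale statefulset ss --replicas=3
  kubectl -n statefulset-6882 get pods -l baz=blah,foo=bar
  # Restore readiness; the controller resumes, creating ss-1 then ss-2 in order:
  kubectl -n statefulset-6882 exec ss-0 -- /bin/sh -c \
    'mv -v /tmp/index.html /usr/share/nginx/html/ || true'

Scale-down mirrors this: ss-2 is removed first, and deletion halts while any pod is unready, which is what "scaled down in reverse order" verifies above.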
Aug 5 13:06:10.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 5 13:06:10.710: INFO: namespace statefulset-6882 deletion completed in 6.117211633s • [SLOW TEST:98.407 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 5 13:06:10.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4300 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 5 13:06:10.753: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 5 13:06:36.854: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.71 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4300 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 5 13:06:36.854: INFO: >>> kubeConfig: /root/.kube/config I0805 13:06:36.890294 6 log.go:172] (0xc001f38420) (0xc002226640) Create stream I0805 13:06:36.890319 6 log.go:172] (0xc001f38420) (0xc002226640) Stream added, broadcasting: 1 I0805 13:06:36.892010 6 log.go:172] (0xc001f38420) Reply frame received for 1 I0805 13:06:36.892052 6 log.go:172] (0xc001f38420) (0xc0001c8320) Create stream I0805 13:06:36.892066 6 log.go:172] (0xc001f38420) (0xc0001c8320) Stream added, broadcasting: 3 I0805 13:06:36.893153 6 log.go:172] (0xc001f38420) Reply frame received for 3 I0805 13:06:36.893195 6 log.go:172] (0xc001f38420) (0xc0022266e0) Create stream I0805 13:06:36.893203 6 log.go:172] (0xc001f38420) (0xc0022266e0) Stream added, broadcasting: 5 I0805 13:06:36.894055 6 log.go:172] (0xc001f38420) Reply frame received for 5 I0805 13:06:37.950668 6 log.go:172] (0xc001f38420) Data frame received for 3 I0805 13:06:37.950717 6 log.go:172] (0xc0001c8320) (3) Data frame handling I0805 13:06:37.950747 6 log.go:172] (0xc0001c8320) (3) Data frame sent I0805 13:06:37.950769 6 log.go:172] (0xc001f38420) Data frame received for 5 I0805 13:06:37.950789 6 log.go:172] (0xc0022266e0) (5) Data frame handling I0805 13:06:37.950829 6 log.go:172] (0xc001f38420) Data frame received for 3 I0805 13:06:37.950878 6 log.go:172] (0xc0001c8320) (3) Data frame handling I0805 13:06:37.953776 6 log.go:172] (0xc001f38420) Data frame received 
for 1 I0805 13:06:37.953820 6 log.go:172] (0xc002226640) (1) Data frame handling I0805 13:06:37.953858 6 log.go:172] (0xc002226640) (1) Data frame sent I0805 13:06:37.953890 6 log.go:172] (0xc001f38420) (0xc002226640) Stream removed, broadcasting: 1 I0805 13:06:37.953922 6 log.go:172] (0xc001f38420) Go away received I0805 13:06:37.954137 6 log.go:172] (0xc001f38420) (0xc002226640) Stream removed, broadcasting: 1 I0805 13:06:37.954172 6 log.go:172] (0xc001f38420) (0xc0001c8320) Stream removed, broadcasting: 3 I0805 13:06:37.954197 6 log.go:172] (0xc001f38420) (0xc0022266e0) Stream removed, broadcasting: 5 Aug 5 13:06:37.954: INFO: Found all expected endpoints: [netserver-0] Aug 5 13:06:37.958: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.24 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4300 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 5 13:06:37.958: INFO: >>> kubeConfig: /root/.kube/config I0805 13:06:37.994560 6 log.go:172] (0xc002096b00) (0xc001e015e0) Create stream I0805 13:06:37.994579 6 log.go:172] (0xc002096b00) (0xc001e015e0) Stream added, broadcasting: 1 I0805 13:06:37.997173 6 log.go:172] (0xc002096b00) Reply frame received for 1 I0805 13:06:37.997218 6 log.go:172] (0xc002096b00) (0xc0001c83c0) Create stream I0805 13:06:37.997236 6 log.go:172] (0xc002096b00) (0xc0001c83c0) Stream added, broadcasting: 3 I0805 13:06:37.998243 6 log.go:172] (0xc002096b00) Reply frame received for 3 I0805 13:06:37.998289 6 log.go:172] (0xc002096b00) (0xc002361ea0) Create stream I0805 13:06:37.998305 6 log.go:172] (0xc002096b00) (0xc002361ea0) Stream added, broadcasting: 5 I0805 13:06:37.999277 6 log.go:172] (0xc002096b00) Reply frame received for 5 I0805 13:06:39.056789 6 log.go:172] (0xc002096b00) Data frame received for 3 I0805 13:06:39.056827 6 log.go:172] (0xc0001c83c0) (3) Data frame handling I0805 13:06:39.056840 6 log.go:172] (0xc0001c83c0) (3) Data frame sent I0805 13:06:39.056874 6 log.go:172] (0xc002096b00) Data frame received for 5 I0805 13:06:39.056957 6 log.go:172] (0xc002361ea0) (5) Data frame handling I0805 13:06:39.056991 6 log.go:172] (0xc002096b00) Data frame received for 3 I0805 13:06:39.057005 6 log.go:172] (0xc0001c83c0) (3) Data frame handling I0805 13:06:39.058527 6 log.go:172] (0xc002096b00) Data frame received for 1 I0805 13:06:39.058567 6 log.go:172] (0xc001e015e0) (1) Data frame handling I0805 13:06:39.058626 6 log.go:172] (0xc001e015e0) (1) Data frame sent I0805 13:06:39.058672 6 log.go:172] (0xc002096b00) (0xc001e015e0) Stream removed, broadcasting: 1 I0805 13:06:39.058697 6 log.go:172] (0xc002096b00) Go away received I0805 13:06:39.058845 6 log.go:172] (0xc002096b00) (0xc001e015e0) Stream removed, broadcasting: 1 I0805 13:06:39.058875 6 log.go:172] (0xc002096b00) (0xc0001c83c0) Stream removed, broadcasting: 3 I0805 13:06:39.058893 6 log.go:172] (0xc002096b00) (0xc002361ea0) Stream removed, broadcasting: 5 Aug 5 13:06:39.058: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 5 13:06:39.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4300" for this suite. 
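Unlike the HTTP cases, the UDP check pipes the literal string hostName into nc from the hostexec pod, which runs with host networking, so this exercises the node-to-pod path; netserver echoes its hostname back on the same socket. The probe reduces to (pod IPs are this run's ephemeral addresses):

  kubectl -n pod-network-test-4300 exec host-test-container-pod -c hostexec -- \
    /bin/sh -c "echo hostName | nc -w 1 -u 10.244.2.71 8081 | grep -v '^\s*$'"
  # Expected output is the serving pod's hostname, e.g. netserver-0; the grep
  # strips blank lines so an empty reply counts as a failure.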
Aug 5 13:07:03.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 5 13:07:03.158: INFO: namespace pod-network-test-4300 deletion completed in 24.094864254s • [SLOW TEST:52.447 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 5 13:07:03.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-51e5a312-7468-4fe2-8b5b-c127702ce5f4 STEP: Creating a pod to test consume configMaps Aug 5 13:07:03.275: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-37796dd1-6ecf-434c-899c-8aff5bbd4ea7" in namespace "projected-7096" to be "success or failure" Aug 5 13:07:03.279: INFO: Pod "pod-projected-configmaps-37796dd1-6ecf-434c-899c-8aff5bbd4ea7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.384514ms Aug 5 13:07:05.305: INFO: Pod "pod-projected-configmaps-37796dd1-6ecf-434c-899c-8aff5bbd4ea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029314633s Aug 5 13:07:07.308: INFO: Pod "pod-projected-configmaps-37796dd1-6ecf-434c-899c-8aff5bbd4ea7": Phase="Running", Reason="", readiness=true. Elapsed: 4.032930491s Aug 5 13:07:09.312: INFO: Pod "pod-projected-configmaps-37796dd1-6ecf-434c-899c-8aff5bbd4ea7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.036335456s STEP: Saw pod success Aug 5 13:07:09.312: INFO: Pod "pod-projected-configmaps-37796dd1-6ecf-434c-899c-8aff5bbd4ea7" satisfied condition "success or failure" Aug 5 13:07:09.314: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-37796dd1-6ecf-434c-899c-8aff5bbd4ea7 container projected-configmap-volume-test: STEP: delete the pod Aug 5 13:07:09.338: INFO: Waiting for pod pod-projected-configmaps-37796dd1-6ecf-434c-899c-8aff5bbd4ea7 to disappear Aug 5 13:07:09.351: INFO: Pod pod-projected-configmaps-37796dd1-6ecf-434c-899c-8aff5bbd4ea7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 5 13:07:09.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7096" for this suite. 
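"Mappings and Item mode set" means the projected volume remaps a configMap key to a custom file path and pins that file's mode; the test container then checks both the content and the permissions. A hedged standalone equivalent (all names illustrative):

  # projected-demo.yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: projected-demo
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-demo-pod
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox
      command: ["/bin/sh", "-c", "ls -ln /etc/projected/path/to && cat /etc/projected/path/to/data-2"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: projected-demo
            items:
            - key: data-1
              path: path/to/data-2   # the mapping: key data-1 surfaces at this path
              mode: 0400             # the per-item file mode
  # After kubectl apply -f projected-demo.yaml, the pod's logs should show
  # mode -r-------- and content value-1.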
SSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:07:15.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-4437
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4437
STEP: Deleting pre-stop pod
Aug  5 13:07:28.590: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:07:28.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4437" for this suite.
Aug  5 13:08:06.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:08:06.704: INFO: namespace prestop-4437 deletion completed in 38.101371229s

• [SLOW TEST:51.254 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
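The tester pod in this spec carries a preStop lifecycle hook that phones home to the server pod, which is why the server's JSON report above shows "prestop": 1 once the tester is deleted. A minimal sketch of such a pod follows; the wget callback target is a placeholder assumption (the real spec addresses the server pod's IP directly), and LifecycleHandler was still named Handler in the v1.15-era API this suite runs against.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container after the delete is issued,
					// before the kubelet sends SIGTERM.
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							// Hypothetical callback URL; the real spec hits the server pod IP.
							Command: []string{"wget", "-qO-", "http://server:8080/prestop"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}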
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:08:06.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4974
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug  5 13:08:06.770: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug  5 13:08:34.870: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.30:8080/dial?request=hostName&protocol=udp&host=10.244.2.76&port=8081&tries=1'] Namespace:pod-network-test-4974 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:08:34.870: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:08:34.907059 6 log.go:172] (0xc002932790) (0xc002c90500) Create stream
I0805 13:08:34.907090 6 log.go:172] (0xc002932790) (0xc002c90500) Stream added, broadcasting: 1
I0805 13:08:34.909121 6 log.go:172] (0xc002932790) Reply frame received for 1
I0805 13:08:34.909181 6 log.go:172] (0xc002932790) (0xc001fa5a40) Create stream
I0805 13:08:34.909206 6 log.go:172] (0xc002932790) (0xc001fa5a40) Stream added, broadcasting: 3
I0805 13:08:34.910279 6 log.go:172] (0xc002932790) Reply frame received for 3
I0805 13:08:34.910311 6 log.go:172] (0xc002932790) (0xc0010641e0) Create stream
I0805 13:08:34.910323 6 log.go:172] (0xc002932790) (0xc0010641e0) Stream added, broadcasting: 5
I0805 13:08:34.911310 6 log.go:172] (0xc002932790) Reply frame received for 5
I0805 13:08:35.004164 6 log.go:172] (0xc002932790) Data frame received for 3
I0805 13:08:35.004221 6 log.go:172] (0xc001fa5a40) (3) Data frame handling
I0805 13:08:35.004245 6 log.go:172] (0xc001fa5a40) (3) Data frame sent
I0805 13:08:35.004272 6 log.go:172] (0xc002932790) Data frame received for 3
I0805 13:08:35.004283 6 log.go:172] (0xc001fa5a40) (3) Data frame handling
I0805 13:08:35.004833 6 log.go:172] (0xc002932790) Data frame received for 5
I0805 13:08:35.004863 6 log.go:172] (0xc0010641e0) (5) Data frame handling
I0805 13:08:35.006559 6 log.go:172] (0xc002932790) Data frame received for 1
I0805 13:08:35.006597 6 log.go:172] (0xc002c90500) (1) Data frame handling
I0805 13:08:35.006614 6 log.go:172] (0xc002c90500) (1) Data frame sent
I0805 13:08:35.006631 6 log.go:172] (0xc002932790) (0xc002c90500) Stream removed, broadcasting: 1
I0805 13:08:35.006649 6 log.go:172] (0xc002932790) Go away received
I0805 13:08:35.006829 6 log.go:172] (0xc002932790) (0xc002c90500) Stream removed, broadcasting: 1
I0805 13:08:35.006864 6 log.go:172] (0xc002932790) (0xc001fa5a40) Stream removed, broadcasting: 3
I0805 13:08:35.006889 6 log.go:172] (0xc002932790) (0xc0010641e0) Stream removed, broadcasting: 5
Aug  5 13:08:35.006: INFO: Waiting for endpoints: map[]
Aug  5 13:08:35.010: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.30:8080/dial?request=hostName&protocol=udp&host=10.244.1.28&port=8081&tries=1'] Namespace:pod-network-test-4974 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:08:35.010: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:08:35.040902 6 log.go:172] (0xc002f4f550) (0xc0016f0e60) Create stream
I0805 13:08:35.040938 6 log.go:172] (0xc002f4f550) (0xc0016f0e60) Stream added, broadcasting: 1
I0805 13:08:35.043043 6 log.go:172] (0xc002f4f550) Reply frame received for 1
I0805 13:08:35.043093 6 log.go:172] (0xc002f4f550) (0xc0010643c0) Create stream
I0805 13:08:35.043107 6 log.go:172] (0xc002f4f550) (0xc0010643c0) Stream added, broadcasting: 3
I0805 13:08:35.044097 6 log.go:172] (0xc002f4f550) Reply frame received for 3
I0805 13:08:35.044137 6 log.go:172] (0xc002f4f550) (0xc001fa5ea0) Create stream
I0805 13:08:35.044153 6 log.go:172] (0xc002f4f550) (0xc001fa5ea0) Stream added, broadcasting: 5
I0805 13:08:35.045251 6 log.go:172] (0xc002f4f550) Reply frame received for 5
I0805 13:08:35.117535 6 log.go:172] (0xc002f4f550) Data frame received for 3
I0805 13:08:35.117578 6 log.go:172] (0xc0010643c0) (3) Data frame handling
I0805 13:08:35.117605 6 log.go:172] (0xc0010643c0) (3) Data frame sent
I0805 13:08:35.118459 6 log.go:172] (0xc002f4f550) Data frame received for 5
I0805 13:08:35.118502 6 log.go:172] (0xc001fa5ea0) (5) Data frame handling
I0805 13:08:35.118578 6 log.go:172] (0xc002f4f550) Data frame received for 3
I0805 13:08:35.118608 6 log.go:172] (0xc0010643c0) (3) Data frame handling
I0805 13:08:35.120011 6 log.go:172] (0xc002f4f550) Data frame received for 1
I0805 13:08:35.120034 6 log.go:172] (0xc0016f0e60) (1) Data frame handling
I0805 13:08:35.120050 6 log.go:172] (0xc0016f0e60) (1) Data frame sent
I0805 13:08:35.120204 6 log.go:172] (0xc002f4f550) (0xc0016f0e60) Stream removed, broadcasting: 1
I0805 13:08:35.120244 6 log.go:172] (0xc002f4f550) Go away received
I0805 13:08:35.120332 6 log.go:172] (0xc002f4f550) (0xc0016f0e60) Stream removed, broadcasting: 1
I0805 13:08:35.120351 6 log.go:172] (0xc002f4f550) (0xc0010643c0) Stream removed, broadcasting: 3
I0805 13:08:35.120363 6 log.go:172] (0xc002f4f550) (0xc001fa5ea0) Stream removed, broadcasting: 5
Aug  5 13:08:35.120: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:08:35.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4974" for this suite.
Aug  5 13:08:59.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:08:59.217: INFO: namespace pod-network-test-4974 deletion completed in 24.092140941s

• [SLOW TEST:52.513 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
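Both UDP checks above work the same way: the framework execs curl inside host-test-container-pod against a test pod's /dial endpoint, and that pod in turn sends "hostName" over UDP to the target pod on port 8081 and relays back whatever hostname answers; the empty map in "Waiting for endpoints: map[]" indicates no expected endpoints remain outstanding. The same probe can be written directly in Go; the pod IPs below are copied from the log and are only reachable from inside the cluster network.

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

// dial asks the test pod at proberIP to send "hostName" over UDP to
// targetIP:8081 and to relay back whatever hostname answers.
func dial(proberIP, targetIP string) (string, error) {
	q := url.Values{
		"request":  {"hostName"},
		"protocol": {"udp"},
		"host":     {targetIP},
		"port":     {"8081"},
		"tries":    {"1"},
	}
	resp, err := http.Get(fmt.Sprintf("http://%s:8080/dial?%s", proberIP, q.Encode()))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	out, err := dial("10.244.1.30", "10.244.1.28")
	fmt.Println(out, err)
}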
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:08:59.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-9f93cf61-7c25-4aa8-8122-df12a6f2ee0d
STEP: Creating a pod to test consume configMaps
Aug  5 13:08:59.284: INFO: Waiting up to 5m0s for pod "pod-configmaps-7fd3785f-eb0d-403e-83a8-67c809a58c95" in namespace "configmap-7069" to be "success or failure"
Aug  5 13:08:59.304: INFO: Pod "pod-configmaps-7fd3785f-eb0d-403e-83a8-67c809a58c95": Phase="Pending", Reason="", readiness=false. Elapsed: 19.400541ms
Aug  5 13:09:01.307: INFO: Pod "pod-configmaps-7fd3785f-eb0d-403e-83a8-67c809a58c95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023176806s
Aug  5 13:09:03.311: INFO: Pod "pod-configmaps-7fd3785f-eb0d-403e-83a8-67c809a58c95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027348471s
STEP: Saw pod success
Aug  5 13:09:03.312: INFO: Pod "pod-configmaps-7fd3785f-eb0d-403e-83a8-67c809a58c95" satisfied condition "success or failure"
Aug  5 13:09:03.315: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7fd3785f-eb0d-403e-83a8-67c809a58c95 container configmap-volume-test: 
STEP: delete the pod
Aug  5 13:09:03.337: INFO: Waiting for pod pod-configmaps-7fd3785f-eb0d-403e-83a8-67c809a58c95 to disappear
Aug  5 13:09:03.357: INFO: Pod pod-configmaps-7fd3785f-eb0d-403e-83a8-67c809a58c95 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:09:03.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7069" for this suite.
Aug  5 13:09:09.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:09:09.456: INFO: namespace configmap-7069 deletion completed in 6.090900665s

• [SLOW TEST:10.239 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:09:09.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0805 13:09:19.544975 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug  5 13:09:19.545: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:09:19.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2395" for this suite.
Aug  5 13:09:25.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:09:25.637: INFO: namespace gc-2395 deletion completed in 6.089253657s

• [SLOW TEST:16.181 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
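The "not orphaning" case hinges on the delete options sent with the RC deletion: with a cascading propagation policy, the garbage collector chases down the pods the RC owned, which is exactly what the spec then waits for. A rough equivalent, written against current client-go signatures (the v1.15-era client took the options by pointer); the RC name is a placeholder assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Background (or Foreground) propagation cascades the delete to the
	// RC's pods; Orphan would leave them behind for the "orphaning" specs.
	policy := metav1.DeletePropagationBackground
	err = client.CoreV1().ReplicationControllers("gc-2395").Delete(
		context.TODO(),
		"simpletest.rc", // placeholder RC name
		metav1.DeleteOptions{PropagationPolicy: &policy},
	)
	fmt.Println(err)
}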
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:09:25.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 13:09:25.692: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db06253e-d5b9-4c74-99cf-17b5d20d1b53" in namespace "downward-api-6115" to be "success or failure"
Aug  5 13:09:25.705: INFO: Pod "downwardapi-volume-db06253e-d5b9-4c74-99cf-17b5d20d1b53": Phase="Pending", Reason="", readiness=false. Elapsed: 12.596402ms
Aug  5 13:09:27.710: INFO: Pod "downwardapi-volume-db06253e-d5b9-4c74-99cf-17b5d20d1b53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017007169s
Aug  5 13:09:29.714: INFO: Pod "downwardapi-volume-db06253e-d5b9-4c74-99cf-17b5d20d1b53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021816303s
STEP: Saw pod success
Aug  5 13:09:29.714: INFO: Pod "downwardapi-volume-db06253e-d5b9-4c74-99cf-17b5d20d1b53" satisfied condition "success or failure"
Aug  5 13:09:29.718: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-db06253e-d5b9-4c74-99cf-17b5d20d1b53 container client-container: 
STEP: delete the pod
Aug  5 13:09:29.805: INFO: Waiting for pod downwardapi-volume-db06253e-d5b9-4c74-99cf-17b5d20d1b53 to disappear
Aug  5 13:09:29.864: INFO: Pod downwardapi-volume-db06253e-d5b9-4c74-99cf-17b5d20d1b53 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:09:29.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6115" for this suite.
Aug  5 13:09:35.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:09:35.968: INFO: namespace downward-api-6115 deletion completed in 6.100423137s

• [SLOW TEST:10.331 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
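The "podname only" pod mounts a downward-API volume with a single file backed by a fieldRef on metadata.name; the kubelet writes the pod's own name into that file before the container starts. A minimal sketch, with illustrative names and busybox standing in for the framework's mounttest image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							// The kubelet materializes metadata.name as a file in the volume.
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}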
SSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:09:35.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-df4a38a2-c682-4685-b4f0-663eb2e21a4e
Aug  5 13:09:36.038: INFO: Pod name my-hostname-basic-df4a38a2-c682-4685-b4f0-663eb2e21a4e: Found 0 pods out of 1
Aug  5 13:09:41.043: INFO: Pod name my-hostname-basic-df4a38a2-c682-4685-b4f0-663eb2e21a4e: Found 1 pods out of 1
Aug  5 13:09:41.043: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-df4a38a2-c682-4685-b4f0-663eb2e21a4e" are running
Aug  5 13:09:41.046: INFO: Pod "my-hostname-basic-df4a38a2-c682-4685-b4f0-663eb2e21a4e-wlvqj" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-05 13:09:36 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-05 13:09:39 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-05 13:09:39 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-05 13:09:36 +0000 UTC Reason: Message:}])
Aug  5 13:09:41.046: INFO: Trying to dial the pod
Aug  5 13:09:46.058: INFO: Controller my-hostname-basic-df4a38a2-c682-4685-b4f0-663eb2e21a4e: Got expected result from replica 1 [my-hostname-basic-df4a38a2-c682-4685-b4f0-663eb2e21a4e-wlvqj]: "my-hostname-basic-df4a38a2-c682-4685-b4f0-663eb2e21a4e-wlvqj", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:09:46.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-956" for this suite.
Aug  5 13:09:52.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:09:52.172: INFO: namespace replication-controller-956 deletion completed in 6.109565858s

• [SLOW TEST:16.203 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:09:52.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 13:09:52.687: INFO: Waiting up to 5m0s for pod "downwardapi-volume-317aa998-b4ff-4894-8bdb-4bb5e44c0833" in namespace "downward-api-732" to be "success or failure"
Aug  5 13:09:52.914: INFO: Pod "downwardapi-volume-317aa998-b4ff-4894-8bdb-4bb5e44c0833": Phase="Pending", Reason="", readiness=false. Elapsed: 226.768575ms
Aug  5 13:09:54.918: INFO: Pod "downwardapi-volume-317aa998-b4ff-4894-8bdb-4bb5e44c0833": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230378328s
Aug  5 13:09:56.923: INFO: Pod "downwardapi-volume-317aa998-b4ff-4894-8bdb-4bb5e44c0833": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.235063056s
STEP: Saw pod success
Aug  5 13:09:56.923: INFO: Pod "downwardapi-volume-317aa998-b4ff-4894-8bdb-4bb5e44c0833" satisfied condition "success or failure"
Aug  5 13:09:56.926: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-317aa998-b4ff-4894-8bdb-4bb5e44c0833 container client-container: 
STEP: delete the pod
Aug  5 13:09:56.946: INFO: Waiting for pod downwardapi-volume-317aa998-b4ff-4894-8bdb-4bb5e44c0833 to disappear
Aug  5 13:09:56.990: INFO: Pod downwardapi-volume-317aa998-b4ff-4894-8bdb-4bb5e44c0833 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:09:56.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-732" for this suite.
Aug  5 13:10:03.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:10:03.371: INFO: namespace downward-api-732 deletion completed in 6.376060857s

• [SLOW TEST:11.199 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:10:03.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug  5 13:10:03.423: INFO: Waiting up to 5m0s for pod "downward-api-38e01103-451a-46df-bf75-62e1e875ffef" in namespace "downward-api-3304" to be "success or failure"
Aug  5 13:10:03.469: INFO: Pod "downward-api-38e01103-451a-46df-bf75-62e1e875ffef": Phase="Pending", Reason="", readiness=false. Elapsed: 45.327339ms
Aug  5 13:10:05.511: INFO: Pod "downward-api-38e01103-451a-46df-bf75-62e1e875ffef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087457039s
Aug  5 13:10:07.515: INFO: Pod "downward-api-38e01103-451a-46df-bf75-62e1e875ffef": Phase="Running", Reason="", readiness=true. Elapsed: 4.091696971s
Aug  5 13:10:09.520: INFO: Pod "downward-api-38e01103-451a-46df-bf75-62e1e875ffef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096463834s
STEP: Saw pod success
Aug  5 13:10:09.520: INFO: Pod "downward-api-38e01103-451a-46df-bf75-62e1e875ffef" satisfied condition "success or failure"
Aug  5 13:10:09.522: INFO: Trying to get logs from node iruya-worker2 pod downward-api-38e01103-451a-46df-bf75-62e1e875ffef container dapi-container: 
STEP: delete the pod
Aug  5 13:10:09.557: INFO: Waiting for pod downward-api-38e01103-451a-46df-bf75-62e1e875ffef to disappear
Aug  5 13:10:09.568: INFO: Pod downward-api-38e01103-451a-46df-bf75-62e1e875ffef no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:10:09.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3304" for this suite.
Aug  5 13:10:15.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:10:15.664: INFO: namespace downward-api-3304 deletion completed in 6.092567564s

• [SLOW TEST:12.293 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
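For the env-var flavor of the downward API, the same fieldRef mechanism feeds metadata.uid into the container environment instead of a volume file. A sketch; the variable name and command are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep POD_UID"},
				Env: []corev1.EnvVar{{
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						// Resolved by the kubelet to the pod's own UID at start-up.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}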
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:10:15.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 13:10:15.737: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
alternatives.log
containers/

[... the same /proxy/logs/ listing repeats for the remaining 19 proxied requests; the per-request INFO lines, this spec's teardown and summary, and the header of the following spec are truncated in the source ...]
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:10:26.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3067" for this suite.
Aug  5 13:11:12.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:11:12.096: INFO: namespace kubelet-test-3067 deletion completed in 46.090140409s

• [SLOW TEST:50.177 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
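The read-only-busybox spec leaves almost no trace in the log because the assertion happens inside the pod: the container runs with ReadOnlyRootFilesystem set in its security context, and the test only checks that a write to the root filesystem fails. Roughly the following shape; the image and command are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox-readonly-fs",
				Image: "busybox",
				// The redirect must fail: the rootfs is mounted read-only.
				Command: []string{"sh", "-c", "echo test > /file; sleep 240"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}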
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:11:12.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug  5 13:11:12.152: INFO: Waiting up to 5m0s for pod "pod-b3c7c695-5036-436b-bc8f-cac1a4000f8f" in namespace "emptydir-2169" to be "success or failure"
Aug  5 13:11:12.162: INFO: Pod "pod-b3c7c695-5036-436b-bc8f-cac1a4000f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.472367ms
Aug  5 13:11:14.447: INFO: Pod "pod-b3c7c695-5036-436b-bc8f-cac1a4000f8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295747362s
Aug  5 13:11:16.452: INFO: Pod "pod-b3c7c695-5036-436b-bc8f-cac1a4000f8f": Phase="Running", Reason="", readiness=true. Elapsed: 4.300095467s
Aug  5 13:11:18.456: INFO: Pod "pod-b3c7c695-5036-436b-bc8f-cac1a4000f8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.304219949s
STEP: Saw pod success
Aug  5 13:11:18.456: INFO: Pod "pod-b3c7c695-5036-436b-bc8f-cac1a4000f8f" satisfied condition "success or failure"
Aug  5 13:11:18.459: INFO: Trying to get logs from node iruya-worker2 pod pod-b3c7c695-5036-436b-bc8f-cac1a4000f8f container test-container: 
STEP: delete the pod
Aug  5 13:11:18.476: INFO: Waiting for pod pod-b3c7c695-5036-436b-bc8f-cac1a4000f8f to disappear
Aug  5 13:11:18.480: INFO: Pod pod-b3c7c695-5036-436b-bc8f-cac1a4000f8f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:11:18.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2169" for this suite.
Aug  5 13:11:24.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:11:24.620: INFO: namespace emptydir-2169 deletion completed in 6.136203473s

• [SLOW TEST:12.524 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
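Decoding the spec name: (non-root,0644,default) means the file is written by a non-root user, with mode 0644, on the default disk-backed emptyDir medium; the sibling variants differ only in these three knobs. A sketch of the shape of such a pod, with an assumed uid, path, and command in place of the framework's mounttest image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root uid

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// An empty EmptyDirVolumeSource selects the default, disk-backed medium.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"echo hello > /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a %u' /test-volume/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}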
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:11:24.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-1d4c02ed-11ba-4324-bbe7-fb55f7d73f87
STEP: Creating a pod to test consume configMaps
Aug  5 13:11:24.701: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b77c1533-a4eb-4ff3-978f-44011daad101" in namespace "projected-5268" to be "success or failure"
Aug  5 13:11:24.718: INFO: Pod "pod-projected-configmaps-b77c1533-a4eb-4ff3-978f-44011daad101": Phase="Pending", Reason="", readiness=false. Elapsed: 17.756829ms
Aug  5 13:11:26.723: INFO: Pod "pod-projected-configmaps-b77c1533-a4eb-4ff3-978f-44011daad101": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022038642s
Aug  5 13:11:28.728: INFO: Pod "pod-projected-configmaps-b77c1533-a4eb-4ff3-978f-44011daad101": Phase="Running", Reason="", readiness=true. Elapsed: 4.027468539s
Aug  5 13:11:30.733: INFO: Pod "pod-projected-configmaps-b77c1533-a4eb-4ff3-978f-44011daad101": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031971194s
STEP: Saw pod success
Aug  5 13:11:30.733: INFO: Pod "pod-projected-configmaps-b77c1533-a4eb-4ff3-978f-44011daad101" satisfied condition "success or failure"
Aug  5 13:11:30.736: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-b77c1533-a4eb-4ff3-978f-44011daad101 container projected-configmap-volume-test: 
STEP: delete the pod
Aug  5 13:11:30.808: INFO: Waiting for pod pod-projected-configmaps-b77c1533-a4eb-4ff3-978f-44011daad101 to disappear
Aug  5 13:11:30.813: INFO: Pod pod-projected-configmaps-b77c1533-a4eb-4ff3-978f-44011daad101 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:11:30.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5268" for this suite.
Aug  5 13:11:36.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:11:36.918: INFO: namespace projected-5268 deletion completed in 6.102121044s

• [SLOW TEST:12.297 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:11:36.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 13:11:37.004: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ab4b034-53f3-4054-95e9-43162a14a619" in namespace "downward-api-6439" to be "success or failure"
Aug  5 13:11:37.007: INFO: Pod "downwardapi-volume-3ab4b034-53f3-4054-95e9-43162a14a619": Phase="Pending", Reason="", readiness=false. Elapsed: 3.231056ms
Aug  5 13:11:39.082: INFO: Pod "downwardapi-volume-3ab4b034-53f3-4054-95e9-43162a14a619": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077937878s
Aug  5 13:11:41.086: INFO: Pod "downwardapi-volume-3ab4b034-53f3-4054-95e9-43162a14a619": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082749104s
STEP: Saw pod success
Aug  5 13:11:41.086: INFO: Pod "downwardapi-volume-3ab4b034-53f3-4054-95e9-43162a14a619" satisfied condition "success or failure"
Aug  5 13:11:41.090: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-3ab4b034-53f3-4054-95e9-43162a14a619 container client-container: 
STEP: delete the pod
Aug  5 13:11:41.111: INFO: Waiting for pod downwardapi-volume-3ab4b034-53f3-4054-95e9-43162a14a619 to disappear
Aug  5 13:11:41.153: INFO: Pod downwardapi-volume-3ab4b034-53f3-4054-95e9-43162a14a619 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:11:41.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6439" for this suite.
Aug  5 13:11:49.185: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:11:49.420: INFO: namespace downward-api-6439 deletion completed in 8.262121662s

• [SLOW TEST:12.502 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
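The cpu-request and memory-limit flavors of the downward-API volume use resourceFieldRef rather than fieldRef: the kubelet resolves the named container's resource figure and writes it into the file, in bytes for memory (67108864 for the assumed 64Mi below). A sketch:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							// Resolves to the named container's memory limit.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}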
SSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:11:49.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-58a52a04-4479-4f99-b129-fc2d0ca1f470
STEP: Creating secret with name s-test-opt-upd-84739187-12c1-4053-a23f-2e5213e56cad
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-58a52a04-4479-4f99-b129-fc2d0ca1f470
STEP: Updating secret s-test-opt-upd-84739187-12c1-4053-a23f-2e5213e56cad
STEP: Creating secret with name s-test-opt-create-d308a15d-625d-4744-a15e-c4a977b7de2c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:13:28.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2102" for this suite.
Aug  5 13:13:50.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:13:50.511: INFO: namespace secrets-2102 deletion completed in 22.091637346s

• [SLOW TEST:121.091 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
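The three secrets in this spec exercise the Optional flag on secret volume sources: one optional volume whose secret gets deleted, one whose payload is updated, and one whose secret is only created after the pod is running, with the kubelet expected to reconcile all three mounts in place (hence the long "waiting to observe update in volume" phase). The key line in the pod spec is the Optional pointer; a sketch with one such volume, names assumed:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "creds-del",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "s-test-opt-del", // may be deleted while the pod runs
						// Optional lets the pod start, and keep running, even if
						// the referenced secret is missing.
						Optional: &optional,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "creds-watch",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do ls /etc/creds-del; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "creds-del", MountPath: "/etc/creds-del"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}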
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:13:50.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug  5 13:13:55.128: INFO: Successfully updated pod "annotationupdate50730212-7aca-405d-8ff9-3ac8775521d9"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:13:59.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2515" for this suite.
Aug  5 13:14:21.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:14:21.279: INFO: namespace downward-api-2515 deletion completed in 22.116903438s

• [SLOW TEST:30.768 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:14:21.280: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-1d71be40-2434-46be-9d17-a614ef8d7fb4
STEP: Creating a pod to test consume configMaps
Aug  5 13:14:21.392: INFO: Waiting up to 5m0s for pod "pod-configmaps-16e8e5ec-a8d3-4255-bb9f-343e87a80fb4" in namespace "configmap-9303" to be "success or failure"
Aug  5 13:14:21.400: INFO: Pod "pod-configmaps-16e8e5ec-a8d3-4255-bb9f-343e87a80fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.351772ms
Aug  5 13:14:23.449: INFO: Pod "pod-configmaps-16e8e5ec-a8d3-4255-bb9f-343e87a80fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05692397s
Aug  5 13:14:25.453: INFO: Pod "pod-configmaps-16e8e5ec-a8d3-4255-bb9f-343e87a80fb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060663726s
STEP: Saw pod success
Aug  5 13:14:25.453: INFO: Pod "pod-configmaps-16e8e5ec-a8d3-4255-bb9f-343e87a80fb4" satisfied condition "success or failure"
Aug  5 13:14:25.455: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-16e8e5ec-a8d3-4255-bb9f-343e87a80fb4 container configmap-volume-test: 
STEP: delete the pod
Aug  5 13:14:25.588: INFO: Waiting for pod pod-configmaps-16e8e5ec-a8d3-4255-bb9f-343e87a80fb4 to disappear
Aug  5 13:14:25.669: INFO: Pod pod-configmaps-16e8e5ec-a8d3-4255-bb9f-343e87a80fb4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:14:25.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9303" for this suite.
Aug  5 13:14:31.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:14:31.821: INFO: namespace configmap-9303 deletion completed in 6.148182773s

• [SLOW TEST:10.541 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:14:31.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Aug  5 13:14:31.857: INFO: Waiting up to 5m0s for pod "client-containers-97de2d4e-e0e8-49c5-8b4d-4fa5f2030799" in namespace "containers-7375" to be "success or failure"
Aug  5 13:14:31.872: INFO: Pod "client-containers-97de2d4e-e0e8-49c5-8b4d-4fa5f2030799": Phase="Pending", Reason="", readiness=false. Elapsed: 14.940354ms
Aug  5 13:14:33.876: INFO: Pod "client-containers-97de2d4e-e0e8-49c5-8b4d-4fa5f2030799": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019005399s
Aug  5 13:14:35.880: INFO: Pod "client-containers-97de2d4e-e0e8-49c5-8b4d-4fa5f2030799": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023207251s
STEP: Saw pod success
Aug  5 13:14:35.880: INFO: Pod "client-containers-97de2d4e-e0e8-49c5-8b4d-4fa5f2030799" satisfied condition "success or failure"
Aug  5 13:14:35.883: INFO: Trying to get logs from node iruya-worker2 pod client-containers-97de2d4e-e0e8-49c5-8b4d-4fa5f2030799 container test-container: 
STEP: delete the pod
Aug  5 13:14:35.904: INFO: Waiting for pod client-containers-97de2d4e-e0e8-49c5-8b4d-4fa5f2030799 to disappear
Aug  5 13:14:35.908: INFO: Pod client-containers-97de2d4e-e0e8-49c5-8b4d-4fa5f2030799 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:14:35.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7375" for this suite.
Aug  5 13:14:41.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:14:42.015: INFO: namespace containers-7375 deletion completed in 6.104160712s

• [SLOW TEST:10.194 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
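"Override the image's default command" maps to the Command field on the container: Command replaces the image's ENTRYPOINT, while Args would replace its CMD. The spec's wrapper image simply reports what it was started with; a busybox sketch of the same shape:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Command overrides the image's ENTRYPOINT; leaving it empty
				// would run whatever the image defines.
				Command: []string{"/bin/echo", "command", "override"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}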
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:14:42.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 13:14:42.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug  5 13:14:42.231: INFO: stderr: ""
Aug  5 13:14:42.231: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-09T18:54:28Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:08:45Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:14:42.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8729" for this suite.
Aug  5 13:14:48.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:14:48.341: INFO: namespace kubectl-8729 deletion completed in 6.094526356s

• [SLOW TEST:6.325 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:14:48.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug  5 13:14:52.997: INFO: Successfully updated pod "pod-update-activedeadlineseconds-522842f2-0624-4f25-8411-f9e1f4037297"
Aug  5 13:14:52.997: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-522842f2-0624-4f25-8411-f9e1f4037297" in namespace "pods-3149" to be "terminated due to deadline exceeded"
Aug  5 13:14:53.005: INFO: Pod "pod-update-activedeadlineseconds-522842f2-0624-4f25-8411-f9e1f4037297": Phase="Running", Reason="", readiness=true. Elapsed: 8.155537ms
Aug  5 13:14:55.010: INFO: Pod "pod-update-activedeadlineseconds-522842f2-0624-4f25-8411-f9e1f4037297": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.012689975s
Aug  5 13:14:55.010: INFO: Pod "pod-update-activedeadlineseconds-522842f2-0624-4f25-8411-f9e1f4037297" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:14:55.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3149" for this suite.
Aug  5 13:15:01.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:15:01.100: INFO: namespace pods-3149 deletion completed in 6.085488465s

• [SLOW TEST:12.759 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
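activeDeadlineSeconds is one of the few pod-spec fields that may be mutated on a live pod, and shrinking it is how this spec forces the Failed/DeadlineExceeded phase seen above within about two seconds. A rough equivalent with current client-go signatures; the namespace and pod name are taken from the log, the deadline value is assumed:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	pods := client.CoreV1().Pods("pods-3149")
	pod, err := pods.Get(context.TODO(),
		"pod-update-activedeadlineseconds-522842f2-0624-4f25-8411-f9e1f4037297",
		metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Tighten the deadline; the kubelet then kills the pod and its phase
	// flips to Failed with reason DeadlineExceeded.
	deadline := int64(5)
	pod.Spec.ActiveDeadlineSeconds = &deadline
	_, err = pods.Update(context.TODO(), pod, metav1.UpdateOptions{})
	fmt.Println(err)
}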
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:15:01.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 13:15:01.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:15:05.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2315" for this suite.
Aug  5 13:15:47.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:15:47.330: INFO: namespace pods-2315 deletion completed in 42.095664913s

• [SLOW TEST:46.229 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
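
This spec fetches the pod's log subresource over a websocket rather than a plain HTTP stream; the bytes returned are the same either way. A hedged sketch of the everyday equivalent with client-go (the websocket negotiation itself lives inside the e2e framework, not in this code; pre-1.18 signatures assumed):

package example

import (
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// streamLogs copies a container's log stream to stdout. The conformance
// test exercises the same /log subresource, but negotiates a websocket
// instead of a chunked HTTP response.
func streamLogs(c kubernetes.Interface, ns, pod string) error {
	req := c.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{Follow: true})
	body, err := req.Stream()
	if err != nil {
		return err
	}
	defer body.Close()
	_, err = io.Copy(os.Stdout, body)
	return err
}
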
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:15:47.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug  5 13:15:53.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-b5d53da9-69c2-4263-8c90-a09b9789d34c -c busybox-main-container --namespace=emptydir-9138 -- cat /usr/share/volumeshare/shareddata.txt'
Aug  5 13:15:56.403: INFO: stderr: "I0805 13:15:56.336578     322 log.go:172] (0xc000b7e420) (0xc000b4a960) Create stream\nI0805 13:15:56.336617     322 log.go:172] (0xc000b7e420) (0xc000b4a960) Stream added, broadcasting: 1\nI0805 13:15:56.340144     322 log.go:172] (0xc000b7e420) Reply frame received for 1\nI0805 13:15:56.340176     322 log.go:172] (0xc000b7e420) (0xc000b4a000) Create stream\nI0805 13:15:56.340184     322 log.go:172] (0xc000b7e420) (0xc000b4a000) Stream added, broadcasting: 3\nI0805 13:15:56.341262     322 log.go:172] (0xc000b7e420) Reply frame received for 3\nI0805 13:15:56.341302     322 log.go:172] (0xc000b7e420) (0xc0003aa320) Create stream\nI0805 13:15:56.341315     322 log.go:172] (0xc000b7e420) (0xc0003aa320) Stream added, broadcasting: 5\nI0805 13:15:56.342231     322 log.go:172] (0xc000b7e420) Reply frame received for 5\nI0805 13:15:56.393823     322 log.go:172] (0xc000b7e420) Data frame received for 5\nI0805 13:15:56.393861     322 log.go:172] (0xc0003aa320) (5) Data frame handling\nI0805 13:15:56.393902     322 log.go:172] (0xc000b7e420) Data frame received for 3\nI0805 13:15:56.393944     322 log.go:172] (0xc000b4a000) (3) Data frame handling\nI0805 13:15:56.393973     322 log.go:172] (0xc000b4a000) (3) Data frame sent\nI0805 13:15:56.393990     322 log.go:172] (0xc000b7e420) Data frame received for 3\nI0805 13:15:56.394004     322 log.go:172] (0xc000b4a000) (3) Data frame handling\nI0805 13:15:56.395563     322 log.go:172] (0xc000b7e420) Data frame received for 1\nI0805 13:15:56.395587     322 log.go:172] (0xc000b4a960) (1) Data frame handling\nI0805 13:15:56.395615     322 log.go:172] (0xc000b4a960) (1) Data frame sent\nI0805 13:15:56.395645     322 log.go:172] (0xc000b7e420) (0xc000b4a960) Stream removed, broadcasting: 1\nI0805 13:15:56.395685     322 log.go:172] (0xc000b7e420) Go away received\nI0805 13:15:56.396204     322 log.go:172] (0xc000b7e420) (0xc000b4a960) Stream removed, broadcasting: 1\nI0805 13:15:56.396229     322 log.go:172] (0xc000b7e420) (0xc000b4a000) Stream removed, broadcasting: 3\nI0805 13:15:56.396247     322 log.go:172] (0xc000b7e420) (0xc0003aa320) Stream removed, broadcasting: 5\n"
Aug  5 13:15:56.403: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:15:56.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9138" for this suite.
Aug  5 13:16:02.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:16:02.497: INFO: namespace emptydir-9138 deletion completed in 6.088929367s

• [SLOW TEST:15.167 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
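
The pod under test packages two containers around one emptyDir volume, so a file written by one container is immediately visible to the other; that is what the kubectl exec ... cat above verifies. An illustrative reconstruction of that shape (container names and paths partly taken from the log, the rest hypothetical):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// sharedVolumePod sketches the test pod: a writer and a reader container
// sharing one emptyDir mount.
func sharedVolumePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name:         "shared-data",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{
				{
					Name:         "busybox-main-container",
					Image:        "docker.io/library/busybox:1.29",
					Command:      []string{"sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}},
				},
				{
					Name:         "reader",
					Image:        "docker.io/library/busybox:1.29",
					Command:      []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "shared-data", MountPath: "/usr/share/volumeshare"}},
				},
			},
		},
	}
}
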
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:16:02.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug  5 13:16:07.118: INFO: Successfully updated pod "pod-update-47ef6ab6-e545-4307-b91a-797fa0b8f5b4"
STEP: verifying the updated pod is in kubernetes
Aug  5 13:16:07.129: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:16:07.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4247" for this suite.
Aug  5 13:16:29.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:16:29.299: INFO: namespace pods-4247 deletion completed in 22.165697698s

• [SLOW TEST:26.801 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
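
The update above is the standard read-modify-write cycle against the pod object. A minimal sketch, assuming pre-1.18 client-go signatures; a production caller would wrap this in a retry on 409 Conflict, since the resourceVersion read back by Get can go stale:

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// relabelPod performs the read-modify-write this spec relies on.
func relabelPod(c kubernetes.Interface, ns, name, key, value string) error {
	pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels[key] = value
	_, err = c.CoreV1().Pods(ns).Update(pod)
	return err
}
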
------------------------------
SS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:16:29.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:16:35.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5491" for this suite.
Aug  5 13:16:41.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:16:41.709: INFO: namespace namespaces-5491 deletion completed in 6.087679901s
STEP: Destroying namespace "nsdeletetest-2840" for this suite.
Aug  5 13:16:41.711: INFO: Namespace nsdeletetest-2840 was already deleted
STEP: Destroying namespace "nsdeletetest-6231" for this suite.
Aug  5 13:16:47.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:16:47.838: INFO: namespace nsdeletetest-6231 deletion completed in 6.127581395s

• [SLOW TEST:18.540 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
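
The namespace controller is doing the work here: deleting a namespace asynchronously tears down every namespaced object inside it, the test's service included, while the namespace itself reports phase Terminating. A minimal sketch of the delete-and-verify flow, under the same pre-1.18 signature assumption:

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespace removes the namespace; the namespace controller then
// deletes everything inside it. Deletion is asynchronous, so the
// namespace lingers in phase Terminating until its contents are gone.
func deleteNamespace(c kubernetes.Interface, ns string) error {
	return c.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{})
}

// servicesGone re-checks, as the spec does after recreating the namespace.
func servicesGone(c kubernetes.Interface, ns string) (bool, error) {
	svcs, err := c.CoreV1().Services(ns).List(metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	return len(svcs.Items) == 0, nil
}
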
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:16:47.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 13:17:15.948: INFO: Container started at 2020-08-05 13:16:50 +0000 UTC, pod became ready at 2020-08-05 13:17:14 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:17:15.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5681" for this suite.
Aug  5 13:17:37.964: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:17:38.058: INFO: namespace container-probe-5681 deletion completed in 22.106604805s

• [SLOW TEST:50.220 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
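
The probed container carries an exec readiness probe with a deliberately long initialDelaySeconds; the assertion is that readiness never flips before that delay and that the container is never restarted (readiness failures, unlike liveness failures, only remove the pod from service endpoints). A sketch of the probe shape in the v1.15 API, where Probe still embeds Handler (later releases rename it ProbeHandler); image, command, and timings are illustrative:

package example

import corev1 "k8s.io/api/core/v1"

// readinessContainer sketches the probed container: not ready before the
// initial delay, and with no liveness probe, never restarted.
func readinessContainer() corev1.Container {
	return corev1.Container{
		Name:    "test-webserver",
		Image:   "docker.io/library/busybox:1.29",
		Command: []string{"sleep", "3600"},
		ReadinessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
			},
			InitialDelaySeconds: 20, // readiness cannot flip before this
			PeriodSeconds:       5,
		},
	}
}
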
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:17:38.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug  5 13:17:42.210: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:17:42.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7560" for this suite.
Aug  5 13:17:48.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:17:48.360: INFO: namespace container-runtime-7560 deletion completed in 6.10156393s

• [SLOW TEST:10.301 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
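
TerminationMessagePolicy FallbackToLogsOnError only substitutes the container log for the termination message when the container fails; this container succeeds without writing a message, so the kubelet reports an empty one, matching the Expected: &{} assertion above. Illustrative container spec:

package example

import corev1 "k8s.io/api/core/v1"

// fallbackContainer exits successfully without writing a termination
// message; the log fallback applies only on failure, so the message
// stays empty, which is exactly what the spec asserts.
func fallbackContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "docker.io/library/busybox:1.29",
		Command:                  []string{"/bin/true"},
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}
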
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:17:48.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug  5 13:17:48.393: INFO: PodSpec: initContainers in spec.initContainers
Aug  5 13:18:38.855: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-b7255cf1-f51e-4020-98a6-776d17405283", GenerateName:"", Namespace:"init-container-5870", SelfLink:"/api/v1/namespaces/init-container-5870/pods/pod-init-b7255cf1-f51e-4020-98a6-776d17405283", UID:"c2972741-52b6-4b83-8b40-3b620df79e1c", ResourceVersion:"3093421", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732230268, loc:(*time.Location)(0x7eb18c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"393774139"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8zf7g", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00151bcc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8zf7g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8zf7g", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8zf7g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002dd1298), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00179d920), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002dd1320)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002dd1340)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002dd1348), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002dd134c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732230268, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732230268, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732230268, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732230268, loc:(*time.Location)(0x7eb18c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"10.244.1.51", StartTime:(*v1.Time)(0xc002148800), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00253c460)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00253c4d0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://3c4569277e168e31177a0b2a8354b46e83dd6155605767d6b2c065ef9665600b"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002148840), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002148820), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:18:38.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5870" for this suite.
Aug  5 13:19:00.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:19:01.068: INFO: namespace init-container-5870 deletion completed in 22.122689437s

• [SLOW TEST:72.708 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
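
The dumped PodSpec above tells the story: init1 runs /bin/false, so init2 never starts and the app container run1 stays Waiting while the kubelet restarts init1 with backoff (RestartCount:3 by the time of the dump), because all init containers must succeed before app containers start. A condensed reconstruction of that spec:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod mirrors the dumped spec: init1 always fails, init2 and
// run1 never start, and RestartPolicy Always keeps init1 retrying.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
}
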
------------------------------
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:19:01.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Aug  5 13:19:01.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1493 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug  5 13:19:04.573: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0805 13:19:04.499473     350 log.go:172] (0xc0009f4210) (0xc000766140) Create stream\nI0805 13:19:04.499518     350 log.go:172] (0xc0009f4210) (0xc000766140) Stream added, broadcasting: 1\nI0805 13:19:04.501478     350 log.go:172] (0xc0009f4210) Reply frame received for 1\nI0805 13:19:04.501522     350 log.go:172] (0xc0009f4210) (0xc0007661e0) Create stream\nI0805 13:19:04.501531     350 log.go:172] (0xc0009f4210) (0xc0007661e0) Stream added, broadcasting: 3\nI0805 13:19:04.502321     350 log.go:172] (0xc0009f4210) Reply frame received for 3\nI0805 13:19:04.502371     350 log.go:172] (0xc0009f4210) (0xc0009a06e0) Create stream\nI0805 13:19:04.502382     350 log.go:172] (0xc0009f4210) (0xc0009a06e0) Stream added, broadcasting: 5\nI0805 13:19:04.503115     350 log.go:172] (0xc0009f4210) Reply frame received for 5\nI0805 13:19:04.503148     350 log.go:172] (0xc0009f4210) (0xc0007e6000) Create stream\nI0805 13:19:04.503159     350 log.go:172] (0xc0009f4210) (0xc0007e6000) Stream added, broadcasting: 7\nI0805 13:19:04.503830     350 log.go:172] (0xc0009f4210) Reply frame received for 7\nI0805 13:19:04.503930     350 log.go:172] (0xc0007661e0) (3) Writing data frame\nI0805 13:19:04.504009     350 log.go:172] (0xc0007661e0) (3) Writing data frame\nI0805 13:19:04.504706     350 log.go:172] (0xc0009f4210) Data frame received for 5\nI0805 13:19:04.504716     350 log.go:172] (0xc0009a06e0) (5) Data frame handling\nI0805 13:19:04.504789     350 log.go:172] (0xc0009a06e0) (5) Data frame sent\nI0805 13:19:04.505355     350 log.go:172] (0xc0009f4210) Data frame received for 5\nI0805 13:19:04.505369     350 log.go:172] (0xc0009a06e0) (5) Data frame handling\nI0805 13:19:04.505381     350 log.go:172] (0xc0009a06e0) (5) Data frame sent\nI0805 13:19:04.534120     350 log.go:172] (0xc0009f4210) Data frame received for 5\nI0805 13:19:04.534168     350 log.go:172] (0xc0009a06e0) (5) Data frame handling\nI0805 13:19:04.534205     350 log.go:172] (0xc0009f4210) Data frame received for 7\nI0805 13:19:04.534252     350 log.go:172] (0xc0007e6000) (7) Data frame handling\nI0805 13:19:04.534433     350 log.go:172] (0xc0009f4210) Data frame received for 1\nI0805 13:19:04.534460     350 log.go:172] (0xc000766140) (1) Data frame handling\nI0805 13:19:04.534475     350 log.go:172] (0xc000766140) (1) Data frame sent\nI0805 13:19:04.534489     350 log.go:172] (0xc0009f4210) (0xc000766140) Stream removed, broadcasting: 1\nI0805 13:19:04.534599     350 log.go:172] (0xc0009f4210) (0xc000766140) Stream removed, broadcasting: 1\nI0805 13:19:04.534623     350 log.go:172] (0xc0009f4210) (0xc0007661e0) Stream removed, broadcasting: 3\nI0805 13:19:04.534642     350 log.go:172] (0xc0009f4210) (0xc0009a06e0) Stream removed, broadcasting: 5\nI0805 13:19:04.534880     350 log.go:172] (0xc0009f4210) Go away received\nI0805 13:19:04.534917     350 log.go:172] (0xc0009f4210) (0xc0007e6000) Stream removed, broadcasting: 7\n"
Aug  5 13:19:04.573: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:19:06.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1493" for this suite.
Aug  5 13:19:12.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:19:12.674: INFO: namespace kubectl-1493 deletion completed in 6.082722371s

• [SLOW TEST:11.605 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
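
kubectl run --generator=job/v1 is already flagged as deprecated in the stderr above. What the generator expands to is an ordinary batch/v1 Job; a rough equivalent as an object (names and command taken from the log, the rest illustrative):

package example

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxJob approximates what the deprecated generator creates: a Job
// whose pod runs the given command once and restarts on failure.
func busyboxJob() *batchv1.Job {
	return &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "e2e-test-rm-busybox-job",
						Image:   "docker.io/library/busybox:1.29",
						Command: []string{"sh", "-c", "cat && echo 'stdin closed'"},
						Stdin:   true,
					}},
				},
			},
		},
	}
}
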
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:19:12.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug  5 13:19:20.227: INFO: 0 pods remaining
Aug  5 13:19:20.227: INFO: 0 pods have nil DeletionTimestamp
Aug  5 13:19:20.227: INFO: 
Aug  5 13:19:20.749: INFO: 0 pods remaining
Aug  5 13:19:20.749: INFO: 0 pods have nil DeletionTimestamp
Aug  5 13:19:20.749: INFO: 
STEP: Gathering metrics
W0805 13:19:21.256412       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug  5 13:19:21.256: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:19:21.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8513" for this suite.
Aug  5 13:19:27.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:19:27.661: INFO: namespace gc-8513 deletion completed in 6.401924982s

• [SLOW TEST:14.987 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
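
The phrase "the deleteOptions says so" here means foreground cascading: the RC is marked with the foregroundDeletion finalizer and only disappears after the garbage collector has removed its pods, which is why the test can still observe the RC while the pods drain. A minimal sketch, pre-1.18 signatures assumed:

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCForeground deletes a replication controller with foreground
// propagation: the RC object persists (carrying a foregroundDeletion
// finalizer) until the garbage collector has deleted all its pods.
func deleteRCForeground(c kubernetes.Interface, ns, name string) error {
	fg := metav1.DeletePropagationForeground
	return c.CoreV1().ReplicationControllers(ns).Delete(name, &metav1.DeleteOptions{PropagationPolicy: &fg})
}
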
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:19:27.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug  5 13:19:35.805: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:35.813: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:19:37.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:37.818: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:19:39.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:39.818: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:19:41.814: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:41.818: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:19:43.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:43.818: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:19:45.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:45.817: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:19:47.814: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:47.817: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:19:49.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:49.818: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:19:51.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:51.816: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:19:53.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:53.818: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:19:55.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:55.817: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:19:57.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:57.818: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:19:59.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:19:59.817: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:20:01.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:20:01.817: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:20:03.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:20:03.818: INFO: Pod pod-with-poststart-exec-hook still exists
Aug  5 13:20:05.813: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug  5 13:20:05.818: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:20:05.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8523" for this suite.
Aug  5 13:20:27.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:20:27.920: INFO: namespace container-lifecycle-hook-8523 deletion completed in 22.097827907s

• [SLOW TEST:60.259 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
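
A postStart exec hook runs immediately after the container starts, and the container is not considered Running until the hook returns; the long disappearance poll above is just the pod's graceful deletion afterwards. Illustrative container spec (in the v1.15 API the handler type is corev1.Handler; later releases call it LifecycleHandler):

package example

import corev1 "k8s.io/api/core/v1"

// postStartContainer sketches the hooked container: the exec handler
// runs right after container start, before the pod is reported Running.
func postStartContainer() corev1.Container {
	return corev1.Container{
		Name:  "pod-with-poststart-exec-hook",
		Image: "docker.io/library/busybox:1.29",
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "echo started"}},
			},
		},
	}
}
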
------------------------------
SSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:20:27.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-6630
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6630 to expose endpoints map[]
Aug  5 13:20:28.042: INFO: Get endpoints failed (18.870673ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Aug  5 13:20:29.046: INFO: successfully validated that service multi-endpoint-test in namespace services-6630 exposes endpoints map[] (1.022494906s elapsed)
STEP: Creating pod pod1 in namespace services-6630
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6630 to expose endpoints map[pod1:[100]]
Aug  5 13:20:33.181: INFO: successfully validated that service multi-endpoint-test in namespace services-6630 exposes endpoints map[pod1:[100]] (4.130084942s elapsed)
STEP: Creating pod pod2 in namespace services-6630
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6630 to expose endpoints map[pod1:[100] pod2:[101]]
Aug  5 13:20:37.370: INFO: successfully validated that service multi-endpoint-test in namespace services-6630 exposes endpoints map[pod1:[100] pod2:[101]] (4.185174095s elapsed)
STEP: Deleting pod pod1 in namespace services-6630
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6630 to expose endpoints map[pod2:[101]]
Aug  5 13:20:38.408: INFO: successfully validated that service multi-endpoint-test in namespace services-6630 exposes endpoints map[pod2:[101]] (1.033821099s elapsed)
STEP: Deleting pod pod2 in namespace services-6630
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6630 to expose endpoints map[]
Aug  5 13:20:39.478: INFO: successfully validated that service multi-endpoint-test in namespace services-6630 exposes endpoints map[] (1.064647649s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:20:39.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6630" for this suite.
Aug  5 13:20:45.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:20:45.738: INFO: namespace services-6630 deletion completed in 6.0966494s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:17.818 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
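
The service under test exposes two named ports mapping to different container ports, which is why the endpoints map above reads pod1:[100] and pod2:[101]. A reconstruction of that service (target ports and name from the log; cluster port numbers, port names, and selector are illustrative):

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// multiEndpointService sketches the multiport service: two named ports,
// each resolving to a different container port on the backing pods.
func multiEndpointService() *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "multi-endpoint-test"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "multi-endpoint-test"},
			Ports: []corev1.ServicePort{
				{Name: "portname1", Port: 80, TargetPort: intstr.FromInt(100)},
				{Name: "portname2", Port: 81, TargetPort: intstr.FromInt(101)},
			},
		},
	}
}
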
------------------------------
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:20:45.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug  5 13:20:49.921: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:20:49.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1882" for this suite.
Aug  5 13:20:55.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:20:56.068: INFO: namespace container-runtime-1882 deletion completed in 6.086153449s

• [SLOW TEST:10.330 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
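
Unlike the earlier FallbackToLogsOnError case, this spec sets an explicit terminationMessagePath and runs the container as a non-root user; the kubelet reads the message back from that custom path, hence the matched DONE above. Illustrative container spec (path and UID are assumptions, not taken from the log):

package example

import corev1 "k8s.io/api/core/v1"

// customPathContainer writes its termination message to a non-default
// path while running as a non-root user; the kubelet collects it from
// there when the container terminates.
func customPathContainer() corev1.Container {
	uid := int64(1000) // illustrative non-root UID
	return corev1.Container{
		Name:                   "termination-message-container",
		Image:                  "docker.io/library/busybox:1.29",
		Command:                []string{"sh", "-c", "printf DONE > /dev/termination-custom-log"},
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
	}
}
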
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:20:56.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug  5 13:21:00.702: INFO: Successfully updated pod "labelsupdate927886ae-057c-4f09-925f-5c0e059502be"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:21:04.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3595" for this suite.
Aug  5 13:21:26.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:21:26.822: INFO: namespace projected-3595 deletion completed in 22.087742647s

• [SLOW TEST:30.754 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
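
The downward-API file is live: when the test calls Update on the pod's labels, the kubelet rewrites the projected labels file inside the running container, and the spec just watches the file content change. A sketch of the projected volume involved (volume and file names illustrative):

package example

import corev1 "k8s.io/api/core/v1"

// labelsVolume sketches the projected downward-API volume: the "labels"
// file tracks metadata.labels and is rewritten on label updates.
func labelsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				}},
			},
		},
	}
}
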
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:21:26.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-e1d85987-d235-4bf3-aa53-aca7a102f538
STEP: Creating a pod to test consume secrets
Aug  5 13:21:26.915: INFO: Waiting up to 5m0s for pod "pod-secrets-60b5c4b2-52fc-497f-9fb2-22ba5696a945" in namespace "secrets-4027" to be "success or failure"
Aug  5 13:21:26.929: INFO: Pod "pod-secrets-60b5c4b2-52fc-497f-9fb2-22ba5696a945": Phase="Pending", Reason="", readiness=false. Elapsed: 13.853941ms
Aug  5 13:21:28.933: INFO: Pod "pod-secrets-60b5c4b2-52fc-497f-9fb2-22ba5696a945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018210053s
Aug  5 13:21:30.938: INFO: Pod "pod-secrets-60b5c4b2-52fc-497f-9fb2-22ba5696a945": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022533837s
STEP: Saw pod success
Aug  5 13:21:30.938: INFO: Pod "pod-secrets-60b5c4b2-52fc-497f-9fb2-22ba5696a945" satisfied condition "success or failure"
Aug  5 13:21:30.940: INFO: Trying to get logs from node iruya-worker pod pod-secrets-60b5c4b2-52fc-497f-9fb2-22ba5696a945 container secret-volume-test: 
STEP: delete the pod
Aug  5 13:21:30.963: INFO: Waiting for pod pod-secrets-60b5c4b2-52fc-497f-9fb2-22ba5696a945 to disappear
Aug  5 13:21:31.016: INFO: Pod pod-secrets-60b5c4b2-52fc-497f-9fb2-22ba5696a945 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:21:31.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4027" for this suite.
Aug  5 13:21:37.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:21:37.133: INFO: namespace secrets-4027 deletion completed in 6.112963445s

• [SLOW TEST:10.310 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
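
Consuming one secret in multiple volumes means declaring two volumes that both reference the same secret and mounting each at its own path. An illustrative layout (volume names and mount paths are assumptions):

package example

import corev1 "k8s.io/api/core/v1"

// twoSecretMounts sketches the layout under test: the same secret
// consumed through two separate volumes at different paths.
func twoSecretMounts(secretName string) ([]corev1.Volume, []corev1.VolumeMount) {
	volumes := []corev1.Volume{
		{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
		{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
	}
	mounts := []corev1.VolumeMount{
		{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
		{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
	}
	return volumes, mounts
}
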
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:21:37.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:21:41.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9788" for this suite.
Aug  5 13:21:47.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:21:47.491: INFO: namespace emptydir-wrapper-9788 deletion completed in 6.096886692s

• [SLOW TEST:10.358 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:21:47.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0805 13:22:28.309860       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug  5 13:22:28.309: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:22:28.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4608" for this suite.
Aug  5 13:22:36.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:22:36.400: INFO: namespace gc-4608 deletion completed in 8.086041078s

• [SLOW TEST:48.909 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
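
This is the mirror image of the earlier foreground test: with orphan propagation the garbage collector strips the pods' ownerReferences instead of cascading, so the RC's pods outlive it, which is exactly what the 30-second watch above checks. A minimal sketch, pre-1.18 signatures assumed:

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCOrphan removes the replication controller but leaves its pods
// behind: orphan propagation removes the ownerReferences rather than
// deleting the dependents.
func deleteRCOrphan(c kubernetes.Interface, ns, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return c.CoreV1().ReplicationControllers(ns).Delete(name, &metav1.DeleteOptions{PropagationPolicy: &orphan})
}
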
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:22:36.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 13:22:36.625: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug  5 13:22:36.757: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:36.771: INFO: Number of nodes with available pods: 0
Aug  5 13:22:36.771: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:22:37.882: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:37.885: INFO: Number of nodes with available pods: 0
Aug  5 13:22:37.885: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:22:39.043: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:39.211: INFO: Number of nodes with available pods: 0
Aug  5 13:22:39.211: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:22:39.775: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:39.778: INFO: Number of nodes with available pods: 0
Aug  5 13:22:39.778: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:22:40.799: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:40.809: INFO: Number of nodes with available pods: 0
Aug  5 13:22:40.809: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:22:41.776: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:41.778: INFO: Number of nodes with available pods: 2
Aug  5 13:22:41.778: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug  5 13:22:41.826: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:41.826: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:41.839: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:42.846: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:42.846: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:42.850: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:43.844: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:43.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:43.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:44.844: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:44.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:44.847: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:45.844: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:45.844: INFO: Pod daemon-set-fhxk4 is not available
Aug  5 13:22:45.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:45.847: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:46.853: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:46.853: INFO: Pod daemon-set-fhxk4 is not available
Aug  5 13:22:46.853: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:46.856: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:47.844: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:47.844: INFO: Pod daemon-set-fhxk4 is not available
Aug  5 13:22:47.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:47.849: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:48.844: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:48.844: INFO: Pod daemon-set-fhxk4 is not available
Aug  5 13:22:48.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:48.847: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:49.843: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:49.843: INFO: Pod daemon-set-fhxk4 is not available
Aug  5 13:22:49.843: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:49.846: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:50.843: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:50.843: INFO: Pod daemon-set-fhxk4 is not available
Aug  5 13:22:50.843: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:50.851: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:51.844: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:51.844: INFO: Pod daemon-set-fhxk4 is not available
Aug  5 13:22:51.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:51.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:52.844: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:52.844: INFO: Pod daemon-set-fhxk4 is not available
Aug  5 13:22:52.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:52.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:53.844: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:53.844: INFO: Pod daemon-set-fhxk4 is not available
Aug  5 13:22:53.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:53.849: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:54.844: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:54.844: INFO: Pod daemon-set-fhxk4 is not available
Aug  5 13:22:54.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:54.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:55.843: INFO: Wrong image for pod: daemon-set-fhxk4. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:55.843: INFO: Pod daemon-set-fhxk4 is not available
Aug  5 13:22:55.843: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:55.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:56.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:56.844: INFO: Pod daemon-set-v4qw5 is not available
Aug  5 13:22:56.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:57.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:57.844: INFO: Pod daemon-set-v4qw5 is not available
Aug  5 13:22:57.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:58.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:58.844: INFO: Pod daemon-set-v4qw5 is not available
Aug  5 13:22:58.847: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:22:59.843: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:22:59.843: INFO: Pod daemon-set-v4qw5 is not available
Aug  5 13:22:59.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:23:00.843: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:23:00.847: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:23:01.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:23:01.844: INFO: Pod daemon-set-gxmjb is not available
Aug  5 13:23:01.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:23:02.843: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:23:02.843: INFO: Pod daemon-set-gxmjb is not available
Aug  5 13:23:02.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:23:03.844: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:23:03.844: INFO: Pod daemon-set-gxmjb is not available
Aug  5 13:23:03.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:23:04.847: INFO: Wrong image for pod: daemon-set-gxmjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug  5 13:23:04.847: INFO: Pod daemon-set-gxmjb is not available
Aug  5 13:23:04.856: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:23:05.844: INFO: Pod daemon-set-q4gh8 is not available
Aug  5 13:23:05.848: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug  5 13:23:05.852: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:23:05.855: INFO: Number of nodes with available pods: 1
Aug  5 13:23:05.855: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:23:06.860: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:23:06.864: INFO: Number of nodes with available pods: 1
Aug  5 13:23:06.864: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:23:07.860: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:23:07.864: INFO: Number of nodes with available pods: 1
Aug  5 13:23:07.864: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:23:08.860: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:23:08.863: INFO: Number of nodes with available pods: 2
Aug  5 13:23:08.863: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2066, will wait for the garbage collector to delete the pods
Aug  5 13:23:08.934: INFO: Deleting DaemonSet.extensions daemon-set took: 5.722174ms
Aug  5 13:23:09.234: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.278563ms
Aug  5 13:23:16.337: INFO: Number of nodes with available pods: 0
Aug  5 13:23:16.338: INFO: Number of running nodes: 0, number of available pods: 0
Aug  5 13:23:16.340: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2066/daemonsets","resourceVersion":"3094606"},"items":null}

Aug  5 13:23:16.343: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2066/pods","resourceVersion":"3094606"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:23:16.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2066" for this suite.
Aug  5 13:23:22.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:23:22.480: INFO: namespace daemonsets-2066 deletion completed in 6.11256375s

• [SLOW TEST:46.080 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
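
The RollingUpdate flow exercised above can be sketched outside the test framework. A hedged example, assuming a hypothetical DaemonSet my-ds with a single container named app; the image tags mirror the ones in the log:

  # Create a DaemonSet whose updateStrategy is RollingUpdate.
  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: my-ds
  spec:
    selector:
      matchLabels:
        app: my-ds
    updateStrategy:
      type: RollingUpdate        # replace pods node by node when the template changes
      rollingUpdate:
        maxUnavailable: 1
    template:
      metadata:
        labels:
          app: my-ds
      spec:
        containers:
        - name: app
          image: docker.io/library/nginx:1.14-alpine
  EOF
  # Changing the template image triggers the pod-by-pod replacement seen above.
  kubectl set image daemonset/my-ds app=gcr.io/kubernetes-e2e-test-images/redis:1.0
  kubectl rollout status daemonset/my-ds
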
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:23:22.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6447.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6447.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug  5 13:23:28.595: INFO: DNS probes using dns-6447/dns-test-a47ae797-8101-44ed-bb67-4b5ec0d6ea9c succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:23:28.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6447" for this suite.
Aug  5 13:23:34.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:23:34.805: INFO: namespace dns-6447 deletion completed in 6.118972349s

• [SLOW TEST:12.323 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
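
The dig loops above check that cluster DNS answers for the API server's service record over both UDP and TCP, plus the pod's own A record. A quicker manual spot check, assuming busybox:1.28 (nslookup in newer busybox images is known to be unreliable):

  kubectl run -it --rm dns-check --image=busybox:1.28 --restart=Never \
    -- nslookup kubernetes.default.svc.cluster.local
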
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:23:34.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:24:34.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9825" for this suite.
Aug  5 13:24:56.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:24:56.964: INFO: namespace container-probe-9825 deletion completed in 22.08111369s

• [SLOW TEST:82.159 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
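
This probe test asserts two things at once: a readiness probe that always fails keeps the pod out of Ready indefinitely, and readiness failures, unlike liveness failures, never restart the container. A minimal pod that reproduces both; the name and image are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: never-ready
  spec:
    containers:
    - name: app
      image: busybox:1.28
      command: ["sleep", "3600"]
      readinessProbe:
        exec:
          command: ["/bin/false"]   # always fails, so READY stays 0/1
        periodSeconds: 5
  EOF
  # READY never reaches 1/1 and RESTARTS stays 0: only liveness probes restart containers.
  kubectl get pod never-ready -w
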
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:24:56.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-c90ecf6f-c86d-43d2-a833-958fb329364c
STEP: Creating a pod to test consume secrets
Aug  5 13:24:57.090: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-00e647e1-3645-4a31-9ccc-db814343e887" in namespace "projected-1443" to be "success or failure"
Aug  5 13:24:57.100: INFO: Pod "pod-projected-secrets-00e647e1-3645-4a31-9ccc-db814343e887": Phase="Pending", Reason="", readiness=false. Elapsed: 10.140076ms
Aug  5 13:24:59.105: INFO: Pod "pod-projected-secrets-00e647e1-3645-4a31-9ccc-db814343e887": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014775107s
Aug  5 13:25:01.109: INFO: Pod "pod-projected-secrets-00e647e1-3645-4a31-9ccc-db814343e887": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018835321s
STEP: Saw pod success
Aug  5 13:25:01.109: INFO: Pod "pod-projected-secrets-00e647e1-3645-4a31-9ccc-db814343e887" satisfied condition "success or failure"
Aug  5 13:25:01.111: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-00e647e1-3645-4a31-9ccc-db814343e887 container secret-volume-test: 
STEP: delete the pod
Aug  5 13:25:01.155: INFO: Waiting for pod pod-projected-secrets-00e647e1-3645-4a31-9ccc-db814343e887 to disappear
Aug  5 13:25:01.164: INFO: Pod pod-projected-secrets-00e647e1-3645-4a31-9ccc-db814343e887 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:25:01.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1443" for this suite.
Aug  5 13:25:07.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:25:07.231: INFO: namespace projected-1443 deletion completed in 6.062227339s

• [SLOW TEST:10.266 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
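
The projected-secret test consumes one secret through more than one volume in the same pod. One plausible shape, with hypothetical names (it assumes a pre-existing secret my-secret with a key named data; the container name secret-volume-test matches the log):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-twice
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox:1.28
      command: ["sh", "-c", "cat /etc/creds-a/data /etc/creds-b/data"]
      volumeMounts:
      - name: creds-a
        mountPath: /etc/creds-a
      - name: creds-b
        mountPath: /etc/creds-b
    volumes:
    - name: creds-a
      projected:
        sources:
        - secret:
            name: my-secret
    - name: creds-b
      projected:
        sources:
        - secret:
            name: my-secret
  EOF
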
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:25:07.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug  5 13:25:07.320: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:25:07.326: INFO: Number of nodes with available pods: 0
Aug  5 13:25:07.326: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:25:08.393: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:25:08.396: INFO: Number of nodes with available pods: 0
Aug  5 13:25:08.396: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:25:09.329: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:25:09.331: INFO: Number of nodes with available pods: 0
Aug  5 13:25:09.331: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:25:11.131: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:25:11.134: INFO: Number of nodes with available pods: 0
Aug  5 13:25:11.134: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:25:11.330: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:25:11.333: INFO: Number of nodes with available pods: 0
Aug  5 13:25:11.333: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:25:12.376: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:25:12.824: INFO: Number of nodes with available pods: 0
Aug  5 13:25:12.824: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:25:13.330: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:25:13.332: INFO: Number of nodes with available pods: 0
Aug  5 13:25:13.332: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:25:14.719: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:25:14.950: INFO: Number of nodes with available pods: 0
Aug  5 13:25:14.950: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:25:15.545: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:25:15.585: INFO: Number of nodes with available pods: 0
Aug  5 13:25:15.585: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:25:16.333: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:25:16.335: INFO: Number of nodes with available pods: 0
Aug  5 13:25:16.335: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 13:25:17.331: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:25:17.334: INFO: Number of nodes with available pods: 2
Aug  5 13:25:17.334: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug  5 13:25:17.353: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 13:25:17.358: INFO: Number of nodes with available pods: 2
Aug  5 13:25:17.358: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5891, will wait for the garbage collector to delete the pods
Aug  5 13:25:18.473: INFO: Deleting DaemonSet.extensions daemon-set took: 6.240779ms
Aug  5 13:25:18.773: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.289611ms
Aug  5 13:25:26.380: INFO: Number of nodes with available pods: 0
Aug  5 13:25:26.380: INFO: Number of running nodes: 0, number of available pods: 0
Aug  5 13:25:26.383: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5891/daemonsets","resourceVersion":"3095037"},"items":null}

Aug  5 13:25:26.385: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5891/pods","resourceVersion":"3095037"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:25:26.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5891" for this suite.
Aug  5 13:25:32.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:25:32.497: INFO: namespace daemonsets-5891 deletion completed in 6.102628967s

• [SLOW TEST:25.266 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
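
Forcing a daemon pod's phase to Failed, as this test does through the API, is awkward to do by hand, but the revival itself is easy to observe: deleting (or failing) a daemon pod makes the controller schedule a replacement on the same node. A sketch with hypothetical names:

  kubectl get pods -l app=my-ds        # note one pod name, e.g. my-ds-abcde
  kubectl delete pod my-ds-abcde
  kubectl get pods -l app=my-ds -w     # a replacement appears on the same node
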
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:25:32.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Aug  5 13:25:32.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-226'
Aug  5 13:25:32.981: INFO: stderr: ""
Aug  5 13:25:32.982: INFO: stdout: "pod/pause created\n"
Aug  5 13:25:32.982: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug  5 13:25:32.982: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-226" to be "running and ready"
Aug  5 13:25:33.000: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 18.186798ms
Aug  5 13:25:35.004: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021985064s
Aug  5 13:25:37.008: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.025991969s
Aug  5 13:25:37.008: INFO: Pod "pause" satisfied condition "running and ready"
Aug  5 13:25:37.008: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Aug  5 13:25:37.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-226'
Aug  5 13:25:37.126: INFO: stderr: ""
Aug  5 13:25:37.126: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug  5 13:25:37.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-226'
Aug  5 13:25:37.229: INFO: stderr: ""
Aug  5 13:25:37.229: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug  5 13:25:37.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-226'
Aug  5 13:25:37.341: INFO: stderr: ""
Aug  5 13:25:37.341: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug  5 13:25:37.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-226'
Aug  5 13:25:37.442: INFO: stderr: ""
Aug  5 13:25:37.442: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          5s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Aug  5 13:25:37.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-226'
Aug  5 13:25:37.559: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  5 13:25:37.559: INFO: stdout: "pod \"pause\" force deleted\n"
Aug  5 13:25:37.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-226'
Aug  5 13:25:37.662: INFO: stderr: "No resources found.\n"
Aug  5 13:25:37.662: INFO: stdout: ""
Aug  5 13:25:37.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-226 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug  5 13:25:37.872: INFO: stderr: ""
Aug  5 13:25:37.872: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:25:37.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-226" for this suite.
Aug  5 13:25:43.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:25:43.994: INFO: namespace kubectl-226 deletion completed in 6.117334136s

• [SLOW TEST:11.496 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
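
The label round trip above, condensed into the three kubectl invocations the test runs (pod name, label key, and value are the test's own):

  kubectl label pod pause testing-label=testing-label-value   # add the label
  kubectl get pod pause -L testing-label                      # show it as a column
  kubectl label pod pause testing-label-                      # trailing '-' removes it
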
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:25:43.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Aug  5 13:25:44.084: INFO: Waiting up to 5m0s for pod "var-expansion-7517f49b-cb9e-4dc0-9fd4-83f6e645543e" in namespace "var-expansion-9746" to be "success or failure"
Aug  5 13:25:44.139: INFO: Pod "var-expansion-7517f49b-cb9e-4dc0-9fd4-83f6e645543e": Phase="Pending", Reason="", readiness=false. Elapsed: 54.809671ms
Aug  5 13:25:46.143: INFO: Pod "var-expansion-7517f49b-cb9e-4dc0-9fd4-83f6e645543e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059094531s
Aug  5 13:25:48.147: INFO: Pod "var-expansion-7517f49b-cb9e-4dc0-9fd4-83f6e645543e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063286196s
STEP: Saw pod success
Aug  5 13:25:48.147: INFO: Pod "var-expansion-7517f49b-cb9e-4dc0-9fd4-83f6e645543e" satisfied condition "success or failure"
Aug  5 13:25:48.151: INFO: Trying to get logs from node iruya-worker pod var-expansion-7517f49b-cb9e-4dc0-9fd4-83f6e645543e container dapi-container: 
STEP: delete the pod
Aug  5 13:25:48.188: INFO: Waiting for pod var-expansion-7517f49b-cb9e-4dc0-9fd4-83f6e645543e to disappear
Aug  5 13:25:48.214: INFO: Pod var-expansion-7517f49b-cb9e-4dc0-9fd4-83f6e645543e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:25:48.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9746" for this suite.
Aug  5 13:25:54.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:25:54.307: INFO: namespace var-expansion-9746 deletion completed in 6.08999698s

• [SLOW TEST:10.313 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
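
The substitution being tested is Kubernetes' own $(VAR) expansion, which the kubelet performs on command/args before exec, with no shell involved. A minimal sketch; names are illustrative apart from dapi-container, which matches the log:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.28
      env:
      - name: MESSAGE
        value: "hello world"
      command: ["/bin/echo"]
      args: ["$(MESSAGE)"]     # expanded from the env var above before exec
  EOF
  kubectl logs var-expansion-demo   # once it completes, prints: hello world
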
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:25:54.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ff17122a-defc-4020-9b8b-14957fc7226b
STEP: Creating a pod to test consume secrets
Aug  5 13:25:54.408: INFO: Waiting up to 5m0s for pod "pod-secrets-6b21791d-759c-45aa-885f-94c75d67b328" in namespace "secrets-165" to be "success or failure"
Aug  5 13:25:54.413: INFO: Pod "pod-secrets-6b21791d-759c-45aa-885f-94c75d67b328": Phase="Pending", Reason="", readiness=false. Elapsed: 4.133064ms
Aug  5 13:25:56.417: INFO: Pod "pod-secrets-6b21791d-759c-45aa-885f-94c75d67b328": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008734571s
Aug  5 13:25:58.422: INFO: Pod "pod-secrets-6b21791d-759c-45aa-885f-94c75d67b328": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013087844s
STEP: Saw pod success
Aug  5 13:25:58.422: INFO: Pod "pod-secrets-6b21791d-759c-45aa-885f-94c75d67b328" satisfied condition "success or failure"
Aug  5 13:25:58.425: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-6b21791d-759c-45aa-885f-94c75d67b328 container secret-volume-test: 
STEP: delete the pod
Aug  5 13:25:58.462: INFO: Waiting for pod pod-secrets-6b21791d-759c-45aa-885f-94c75d67b328 to disappear
Aug  5 13:25:58.466: INFO: Pod pod-secrets-6b21791d-759c-45aa-885f-94c75d67b328 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:25:58.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-165" for this suite.
Aug  5 13:26:04.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:26:04.560: INFO: namespace secrets-165 deletion completed in 6.089845227s

• [SLOW TEST:10.252 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
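
A sketch of the non-root / defaultMode / fsGroup combination this secrets test checks; the UID, GID, and mode are illustrative, and a pre-existing secret my-secret with a key named data is assumed:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-perms
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000    # run the container as non-root
      fsGroup: 2000      # projected files become group-owned by this GID
    containers:
    - name: secret-volume-test
      image: busybox:1.28
      command: ["sh", "-c", "ls -ln /etc/secret && cat /etc/secret/data"]
      volumeMounts:
      - name: secret-vol
        mountPath: /etc/secret
    volumes:
    - name: secret-vol
      secret:
        secretName: my-secret
        defaultMode: 0440   # octal: r--r----- on each projected file
  EOF
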
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:26:04.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug  5 13:26:04.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8018'
Aug  5 13:26:07.380: INFO: stderr: ""
Aug  5 13:26:07.380: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug  5 13:26:07.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8018'
Aug  5 13:26:07.495: INFO: stderr: ""
Aug  5 13:26:07.495: INFO: stdout: "update-demo-nautilus-dkh5j update-demo-nautilus-dx48r "
Aug  5 13:26:07.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dkh5j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:07.587: INFO: stderr: ""
Aug  5 13:26:07.587: INFO: stdout: ""
Aug  5 13:26:07.587: INFO: update-demo-nautilus-dkh5j is created but not running
Aug  5 13:26:12.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8018'
Aug  5 13:26:12.701: INFO: stderr: ""
Aug  5 13:26:12.701: INFO: stdout: "update-demo-nautilus-dkh5j update-demo-nautilus-dx48r "
Aug  5 13:26:12.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dkh5j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:12.798: INFO: stderr: ""
Aug  5 13:26:12.798: INFO: stdout: "true"
Aug  5 13:26:12.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dkh5j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:12.897: INFO: stderr: ""
Aug  5 13:26:12.897: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  5 13:26:12.897: INFO: validating pod update-demo-nautilus-dkh5j
Aug  5 13:26:12.901: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  5 13:26:12.901: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  5 13:26:12.901: INFO: update-demo-nautilus-dkh5j is verified up and running
Aug  5 13:26:12.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dx48r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:13.000: INFO: stderr: ""
Aug  5 13:26:13.000: INFO: stdout: "true"
Aug  5 13:26:13.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dx48r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:13.098: INFO: stderr: ""
Aug  5 13:26:13.098: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  5 13:26:13.098: INFO: validating pod update-demo-nautilus-dx48r
Aug  5 13:26:13.102: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  5 13:26:13.102: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  5 13:26:13.102: INFO: update-demo-nautilus-dx48r is verified up and running
STEP: scaling down the replication controller
Aug  5 13:26:13.120: INFO: scanned /root for discovery docs: 
Aug  5 13:26:13.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8018'
Aug  5 13:26:14.260: INFO: stderr: ""
Aug  5 13:26:14.260: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug  5 13:26:14.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8018'
Aug  5 13:26:14.355: INFO: stderr: ""
Aug  5 13:26:14.355: INFO: stdout: "update-demo-nautilus-dkh5j update-demo-nautilus-dx48r "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug  5 13:26:19.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8018'
Aug  5 13:26:19.452: INFO: stderr: ""
Aug  5 13:26:19.452: INFO: stdout: "update-demo-nautilus-dkh5j update-demo-nautilus-dx48r "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug  5 13:26:24.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8018'
Aug  5 13:26:24.543: INFO: stderr: ""
Aug  5 13:26:24.543: INFO: stdout: "update-demo-nautilus-dkh5j update-demo-nautilus-dx48r "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug  5 13:26:29.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8018'
Aug  5 13:26:29.642: INFO: stderr: ""
Aug  5 13:26:29.642: INFO: stdout: "update-demo-nautilus-dx48r "
Aug  5 13:26:29.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dx48r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:29.742: INFO: stderr: ""
Aug  5 13:26:29.742: INFO: stdout: "true"
Aug  5 13:26:29.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dx48r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:29.841: INFO: stderr: ""
Aug  5 13:26:29.841: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  5 13:26:29.841: INFO: validating pod update-demo-nautilus-dx48r
Aug  5 13:26:29.844: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  5 13:26:29.844: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  5 13:26:29.844: INFO: update-demo-nautilus-dx48r is verified up and running
STEP: scaling up the replication controller
Aug  5 13:26:29.846: INFO: scanned /root for discovery docs: 
Aug  5 13:26:29.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8018'
Aug  5 13:26:30.992: INFO: stderr: ""
Aug  5 13:26:30.992: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug  5 13:26:30.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8018'
Aug  5 13:26:31.103: INFO: stderr: ""
Aug  5 13:26:31.103: INFO: stdout: "update-demo-nautilus-dx48r update-demo-nautilus-j4p5g "
Aug  5 13:26:31.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dx48r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:31.194: INFO: stderr: ""
Aug  5 13:26:31.194: INFO: stdout: "true"
Aug  5 13:26:31.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dx48r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:31.279: INFO: stderr: ""
Aug  5 13:26:31.279: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  5 13:26:31.279: INFO: validating pod update-demo-nautilus-dx48r
Aug  5 13:26:31.282: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  5 13:26:31.282: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  5 13:26:31.282: INFO: update-demo-nautilus-dx48r is verified up and running
Aug  5 13:26:31.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j4p5g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:31.389: INFO: stderr: ""
Aug  5 13:26:31.389: INFO: stdout: ""
Aug  5 13:26:31.389: INFO: update-demo-nautilus-j4p5g is created but not running
Aug  5 13:26:36.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8018'
Aug  5 13:26:36.500: INFO: stderr: ""
Aug  5 13:26:36.500: INFO: stdout: "update-demo-nautilus-dx48r update-demo-nautilus-j4p5g "
Aug  5 13:26:36.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dx48r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:36.596: INFO: stderr: ""
Aug  5 13:26:36.596: INFO: stdout: "true"
Aug  5 13:26:36.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dx48r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:36.706: INFO: stderr: ""
Aug  5 13:26:36.706: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  5 13:26:36.706: INFO: validating pod update-demo-nautilus-dx48r
Aug  5 13:26:36.709: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  5 13:26:36.709: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  5 13:26:36.709: INFO: update-demo-nautilus-dx48r is verified up and running
Aug  5 13:26:36.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j4p5g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:36.805: INFO: stderr: ""
Aug  5 13:26:36.805: INFO: stdout: "true"
Aug  5 13:26:36.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j4p5g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8018'
Aug  5 13:26:36.896: INFO: stderr: ""
Aug  5 13:26:36.896: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  5 13:26:36.896: INFO: validating pod update-demo-nautilus-j4p5g
Aug  5 13:26:36.900: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  5 13:26:36.900: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug  5 13:26:36.900: INFO: update-demo-nautilus-j4p5g is verified up and running
STEP: using delete to clean up resources
Aug  5 13:26:36.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8018'
Aug  5 13:26:37.004: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  5 13:26:37.004: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug  5 13:26:37.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8018'
Aug  5 13:26:37.100: INFO: stderr: "No resources found.\n"
Aug  5 13:26:37.100: INFO: stdout: ""
Aug  5 13:26:37.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8018 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug  5 13:26:37.202: INFO: stderr: ""
Aug  5 13:26:37.202: INFO: stdout: "update-demo-nautilus-dx48r\nupdate-demo-nautilus-j4p5g\n"
Aug  5 13:26:37.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8018'
Aug  5 13:26:37.800: INFO: stderr: "No resources found.\n"
Aug  5 13:26:37.800: INFO: stdout: ""
Aug  5 13:26:37.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8018 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug  5 13:26:37.903: INFO: stderr: ""
Aug  5 13:26:37.903: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:26:37.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8018" for this suite.
Aug  5 13:27:00.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:27:00.377: INFO: namespace kubectl-8018 deletion completed in 22.310676931s

• [SLOW TEST:55.816 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
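
The scale test above validates each replica with two kubectl Go-template queries: one reports whether the update-demo container has reached state "running", the other reports its image. A minimal standalone reproduction of the readiness probe, reusing the pod name and namespace from this run (the template functions are exactly the ones the logged commands use):

  kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dx48r \
    --namespace=kubectl-8018 \
    -o template --template='{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
  # prints "true" once the container is running; empty output means
  # "created but not running", which is why the suite re-polls after 5s.
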
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:27:00.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-8aead9cb-8df4-468e-9e91-d3ffff67868e
STEP: Creating a pod to test consume configMaps
Aug  5 13:27:00.451: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e62f8188-56c5-4a50-ac99-34b975425a27" in namespace "projected-6759" to be "success or failure"
Aug  5 13:27:00.508: INFO: Pod "pod-projected-configmaps-e62f8188-56c5-4a50-ac99-34b975425a27": Phase="Pending", Reason="", readiness=false. Elapsed: 57.344035ms
Aug  5 13:27:02.538: INFO: Pod "pod-projected-configmaps-e62f8188-56c5-4a50-ac99-34b975425a27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087283492s
Aug  5 13:27:04.542: INFO: Pod "pod-projected-configmaps-e62f8188-56c5-4a50-ac99-34b975425a27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090976356s
STEP: Saw pod success
Aug  5 13:27:04.542: INFO: Pod "pod-projected-configmaps-e62f8188-56c5-4a50-ac99-34b975425a27" satisfied condition "success or failure"
Aug  5 13:27:04.545: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-e62f8188-56c5-4a50-ac99-34b975425a27 container projected-configmap-volume-test: 
STEP: delete the pod
Aug  5 13:27:04.573: INFO: Waiting for pod pod-projected-configmaps-e62f8188-56c5-4a50-ac99-34b975425a27 to disappear
Aug  5 13:27:04.580: INFO: Pod pod-projected-configmaps-e62f8188-56c5-4a50-ac99-34b975425a27 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:27:04.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6759" for this suite.
Aug  5 13:27:10.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:27:10.670: INFO: namespace projected-6759 deletion completed in 6.086409878s

• [SLOW TEST:10.293 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
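
The projected-configMap test above mounts a ConfigMap through a projected volume and remaps a key to a new path inside the mount. A minimal sketch of the same shape — names, image, and paths here are illustrative, not taken from this run:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: projected-cm-demo
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-cm-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      # print the remapped file, then exit so the pod reaches Succeeded
      command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-1"]
      volumeMounts:
      - name: projected-configmap-volume
        mountPath: /etc/projected-configmap-volume
    volumes:
    - name: projected-configmap-volume
      projected:
        sources:
        - configMap:
            name: projected-cm-demo
            items:
            - key: data-1
              path: path/to/data-1
  EOF
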
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:27:10.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Aug  5 13:27:10.792: INFO: Waiting up to 5m0s for pod "var-expansion-bf84fc93-c4f6-4243-9276-70d0bb75ed70" in namespace "var-expansion-9718" to be "success or failure"
Aug  5 13:27:10.796: INFO: Pod "var-expansion-bf84fc93-c4f6-4243-9276-70d0bb75ed70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064032ms
Aug  5 13:27:12.801: INFO: Pod "var-expansion-bf84fc93-c4f6-4243-9276-70d0bb75ed70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008177519s
Aug  5 13:27:14.805: INFO: Pod "var-expansion-bf84fc93-c4f6-4243-9276-70d0bb75ed70": Phase="Running", Reason="", readiness=true. Elapsed: 4.01224977s
Aug  5 13:27:16.808: INFO: Pod "var-expansion-bf84fc93-c4f6-4243-9276-70d0bb75ed70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015678911s
STEP: Saw pod success
Aug  5 13:27:16.808: INFO: Pod "var-expansion-bf84fc93-c4f6-4243-9276-70d0bb75ed70" satisfied condition "success or failure"
Aug  5 13:27:16.811: INFO: Trying to get logs from node iruya-worker pod var-expansion-bf84fc93-c4f6-4243-9276-70d0bb75ed70 container dapi-container: 
STEP: delete the pod
Aug  5 13:27:16.875: INFO: Waiting for pod var-expansion-bf84fc93-c4f6-4243-9276-70d0bb75ed70 to disappear
Aug  5 13:27:16.881: INFO: Pod var-expansion-bf84fc93-c4f6-4243-9276-70d0bb75ed70 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:27:16.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9718" for this suite.
Aug  5 13:27:22.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:27:22.977: INFO: namespace var-expansion-9718 deletion completed in 6.092072175s

• [SLOW TEST:12.306 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
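
The env-composition test creates a pod whose environment references earlier variables with the $(VAR) syntax that the kubelet expands before starting the container. A minimal sketch, with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "printenv COMPOSED_VAR"]   # expected: prefix-foo-value-suffix
      env:
      - name: FOO
        value: "foo-value"
      - name: COMPOSED_VAR
        value: "prefix-$(FOO)-suffix"   # $(FOO) expands because FOO is defined earlier in the list
  EOF
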
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services are included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:27:22.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services are included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Aug  5 13:27:23.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug  5 13:27:23.126: INFO: stderr: ""
Aug  5 13:27:23.126: INFO: stdout: "Kubernetes master is running at https://172.30.12.66:38261\nKubeDNS is running at https://172.30.12.66:38261/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:27:23.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2144" for this suite.
Aug  5 13:27:29.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:27:29.223: INFO: namespace kubectl-2144 deletion completed in 6.092998468s

• [SLOW TEST:6.246 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services are included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
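
The cluster-info check is essentially a substring match on the command's stdout. An equivalent by hand — the grep target is the string the test asserts on:

  kubectl --kubeconfig=/root/.kube/config cluster-info | grep 'Kubernetes master' \
    && echo 'master service advertised'
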
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:27:29.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-c023630b-fb4d-48b6-b9ae-d75ac174998a
STEP: Creating a pod to test consume secrets
Aug  5 13:27:29.296: INFO: Waiting up to 5m0s for pod "pod-secrets-b85b08a5-aa1e-499b-a545-45da27554ff0" in namespace "secrets-2323" to be "success or failure"
Aug  5 13:27:29.312: INFO: Pod "pod-secrets-b85b08a5-aa1e-499b-a545-45da27554ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.714424ms
Aug  5 13:27:31.316: INFO: Pod "pod-secrets-b85b08a5-aa1e-499b-a545-45da27554ff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020147971s
Aug  5 13:27:33.320: INFO: Pod "pod-secrets-b85b08a5-aa1e-499b-a545-45da27554ff0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024078333s
STEP: Saw pod success
Aug  5 13:27:33.320: INFO: Pod "pod-secrets-b85b08a5-aa1e-499b-a545-45da27554ff0" satisfied condition "success or failure"
Aug  5 13:27:33.323: INFO: Trying to get logs from node iruya-worker pod pod-secrets-b85b08a5-aa1e-499b-a545-45da27554ff0 container secret-volume-test: 
STEP: delete the pod
Aug  5 13:27:33.356: INFO: Waiting for pod pod-secrets-b85b08a5-aa1e-499b-a545-45da27554ff0 to disappear
Aug  5 13:27:33.360: INFO: Pod pod-secrets-b85b08a5-aa1e-499b-a545-45da27554ff0 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:27:33.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2323" for this suite.
Aug  5 13:27:39.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:27:39.482: INFO: namespace secrets-2323 deletion completed in 6.118678638s

• [SLOW TEST:10.259 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
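
The Secrets test mirrors the projected-configMap one, but with a Secret volume whose key is remapped via items. A minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-map-demo
  stringData:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secret-map-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-map-demo
        items:
        - key: data-1
          path: new-path-data-1
  EOF
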
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:27:39.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 13:27:39.572: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40a4c110-b700-459b-a8ee-0ce550c285a9" in namespace "downward-api-4544" to be "success or failure"
Aug  5 13:27:39.576: INFO: Pod "downwardapi-volume-40a4c110-b700-459b-a8ee-0ce550c285a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131226ms
Aug  5 13:27:41.580: INFO: Pod "downwardapi-volume-40a4c110-b700-459b-a8ee-0ce550c285a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00789012s
Aug  5 13:27:43.584: INFO: Pod "downwardapi-volume-40a4c110-b700-459b-a8ee-0ce550c285a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01220803s
STEP: Saw pod success
Aug  5 13:27:43.584: INFO: Pod "downwardapi-volume-40a4c110-b700-459b-a8ee-0ce550c285a9" satisfied condition "success or failure"
Aug  5 13:27:43.588: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-40a4c110-b700-459b-a8ee-0ce550c285a9 container client-container: 
STEP: delete the pod
Aug  5 13:27:43.613: INFO: Waiting for pod downwardapi-volume-40a4c110-b700-459b-a8ee-0ce550c285a9 to disappear
Aug  5 13:27:43.617: INFO: Pod downwardapi-volume-40a4c110-b700-459b-a8ee-0ce550c285a9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:27:43.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4544" for this suite.
Aug  5 13:27:49.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:27:49.726: INFO: namespace downward-api-4544 deletion completed in 6.104732369s

• [SLOW TEST:10.243 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
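
The DefaultMode test mounts downward-API items with a non-default file mode and verifies the mode on the mounted file. A rough equivalent — mode, names, and paths are illustrative, and ls stands in for the e2e image's programmatic mode check:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-defaultmode-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -lL /etc/podinfo/podname"]   # -L follows the ..data symlink the kubelet creates
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        defaultMode: 0400
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
  EOF
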
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:27:49.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-3c116149-8648-429a-b8de-d2495dd0c936 in namespace container-probe-8512
Aug  5 13:27:53.830: INFO: Started pod test-webserver-3c116149-8648-429a-b8de-d2495dd0c936 in namespace container-probe-8512
STEP: checking the pod's current state and verifying that restartCount is present
Aug  5 13:27:53.833: INFO: Initial restart count of pod test-webserver-3c116149-8648-429a-b8de-d2495dd0c936 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:31:54.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8512" for this suite.
Aug  5 13:32:00.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:32:00.537: INFO: namespace container-probe-8512 deletion completed in 6.093161621s

• [SLOW TEST:250.811 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
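
The probe test above starts a webserver pod, records the initial restartCount, then waits roughly four minutes (hence the 250-second runtime) to confirm the count never rises while the HTTP liveness probe keeps succeeding. A rough equivalent, using a stock nginx image as a stand-in for the e2e test-webserver:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-demo
  spec:
    containers:
    - name: test-webserver
      image: nginx:1.17
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 15
        timeoutSeconds: 5
        failureThreshold: 3
  EOF
  # restartCount should remain 0:
  kubectl get pod liveness-demo \
    -o go-template='{{range .status.containerStatuses}}{{.restartCount}}{{end}}'
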
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:32:00.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug  5 13:32:00.594: INFO: Waiting up to 5m0s for pod "pod-b72b43fc-857f-4c5e-920f-e10aaf1d26e2" in namespace "emptydir-4960" to be "success or failure"
Aug  5 13:32:00.599: INFO: Pod "pod-b72b43fc-857f-4c5e-920f-e10aaf1d26e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.424851ms
Aug  5 13:32:02.644: INFO: Pod "pod-b72b43fc-857f-4c5e-920f-e10aaf1d26e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049996007s
Aug  5 13:32:04.648: INFO: Pod "pod-b72b43fc-857f-4c5e-920f-e10aaf1d26e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05334938s
STEP: Saw pod success
Aug  5 13:32:04.648: INFO: Pod "pod-b72b43fc-857f-4c5e-920f-e10aaf1d26e2" satisfied condition "success or failure"
Aug  5 13:32:04.651: INFO: Trying to get logs from node iruya-worker pod pod-b72b43fc-857f-4c5e-920f-e10aaf1d26e2 container test-container: 
STEP: delete the pod
Aug  5 13:32:04.790: INFO: Waiting for pod pod-b72b43fc-857f-4c5e-920f-e10aaf1d26e2 to disappear
Aug  5 13:32:04.807: INFO: Pod pod-b72b43fc-857f-4c5e-920f-e10aaf1d26e2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:32:04.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4960" for this suite.
Aug  5 13:32:10.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:32:10.886: INFO: namespace emptydir-4960 deletion completed in 6.075635368s

• [SLOW TEST:10.348 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
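
The (non-root,0666,default) emptyDir variant runs as a non-root UID, writes a world-readable file on the default (node-disk-backed) medium, and checks mode and content. A rough sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001      # the non-root part of the variant
    containers:
    - name: test-container
      image: busybox
      # umask 0 so the new file gets mode 0666, the mode under test
      command: ["sh", "-c", "umask 0; echo mount-tester new file > /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}          # default medium; contrast with the tmpfs variant later in this log
  EOF
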
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:32:10.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:32:16.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1953" for this suite.
Aug  5 13:32:22.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:32:22.738: INFO: namespace watch-1953 deletion completed in 6.173245503s

• [SLOW TEST:11.851 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
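
The concurrent-watch test opens many watches, each starting from a different resourceVersion of a generated event stream, and asserts that every watcher observes the same ordering. A coarse shell approximation with two label-filtered watches — the real test drives the watch API directly from Go, and all names here are illustrative:

  kubectl get configmaps -n watch-demo -l watch-this-configmap=race -w -o name > watch-a.log &
  kubectl get configmaps -n watch-demo -l watch-this-configmap=race -w -o name > watch-b.log &
  # ...create/update/delete matching configmaps, then stop both watches...
  diff watch-a.log watch-b.log   # identical output order is the property under test
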
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:32:22.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 13:32:22.823: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31f6ae7c-b073-48da-a660-eb3a2d80f726" in namespace "projected-8935" to be "success or failure"
Aug  5 13:32:22.826: INFO: Pod "downwardapi-volume-31f6ae7c-b073-48da-a660-eb3a2d80f726": Phase="Pending", Reason="", readiness=false. Elapsed: 3.378352ms
Aug  5 13:32:24.830: INFO: Pod "downwardapi-volume-31f6ae7c-b073-48da-a660-eb3a2d80f726": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007389977s
Aug  5 13:32:26.835: INFO: Pod "downwardapi-volume-31f6ae7c-b073-48da-a660-eb3a2d80f726": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012018717s
STEP: Saw pod success
Aug  5 13:32:26.835: INFO: Pod "downwardapi-volume-31f6ae7c-b073-48da-a660-eb3a2d80f726" satisfied condition "success or failure"
Aug  5 13:32:26.838: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-31f6ae7c-b073-48da-a660-eb3a2d80f726 container client-container: 
STEP: delete the pod
Aug  5 13:32:26.910: INFO: Waiting for pod downwardapi-volume-31f6ae7c-b073-48da-a660-eb3a2d80f726 to disappear
Aug  5 13:32:26.917: INFO: Pod downwardapi-volume-31f6ae7c-b073-48da-a660-eb3a2d80f726 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:32:26.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8935" for this suite.
Aug  5 13:32:32.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:32:33.007: INFO: namespace projected-8935 deletion completed in 6.085901236s

• [SLOW TEST:10.269 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
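
The memory-limit test projects a resourceFieldRef into a file and expects the container's limit, rendered in bytes, to appear there. A minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-memlimit-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]   # expected: 67108864 (64Mi in bytes)
      resources:
        limits:
          memory: 64Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: memory_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.memory
  EOF
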
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:32:33.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 13:32:33.094: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21171364-9842-43c7-b2cf-bf05285fddd8" in namespace "projected-308" to be "success or failure"
Aug  5 13:32:33.096: INFO: Pod "downwardapi-volume-21171364-9842-43c7-b2cf-bf05285fddd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.48886ms
Aug  5 13:32:35.100: INFO: Pod "downwardapi-volume-21171364-9842-43c7-b2cf-bf05285fddd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006766698s
Aug  5 13:32:37.137: INFO: Pod "downwardapi-volume-21171364-9842-43c7-b2cf-bf05285fddd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042947822s
STEP: Saw pod success
Aug  5 13:32:37.137: INFO: Pod "downwardapi-volume-21171364-9842-43c7-b2cf-bf05285fddd8" satisfied condition "success or failure"
Aug  5 13:32:37.139: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-21171364-9842-43c7-b2cf-bf05285fddd8 container client-container: 
STEP: delete the pod
Aug  5 13:32:37.156: INFO: Waiting for pod downwardapi-volume-21171364-9842-43c7-b2cf-bf05285fddd8 to disappear
Aug  5 13:32:37.161: INFO: Pod downwardapi-volume-21171364-9842-43c7-b2cf-bf05285fddd8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:32:37.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-308" for this suite.
Aug  5 13:32:43.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:32:43.261: INFO: namespace projected-308 deletion completed in 6.096940855s

• [SLOW TEST:10.254 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:32:43.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug  5 13:32:43.950: INFO: Pod name wrapped-volume-race-a574b005-9301-4732-9feb-63a4adf5754b: Found 0 pods out of 5
Aug  5 13:32:48.960: INFO: Pod name wrapped-volume-race-a574b005-9301-4732-9feb-63a4adf5754b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a574b005-9301-4732-9feb-63a4adf5754b in namespace emptydir-wrapper-8533, will wait for the garbage collector to delete the pods
Aug  5 13:33:01.050: INFO: Deleting ReplicationController wrapped-volume-race-a574b005-9301-4732-9feb-63a4adf5754b took: 17.640411ms
Aug  5 13:33:01.350: INFO: Terminating ReplicationController wrapped-volume-race-a574b005-9301-4732-9feb-63a4adf5754b pods took: 300.262097ms
STEP: Creating RC which spawns configmap-volume pods
Aug  5 13:33:46.118: INFO: Pod name wrapped-volume-race-1835cdd0-0cff-4b88-970d-b62160b3b627: Found 0 pods out of 5
Aug  5 13:33:51.126: INFO: Pod name wrapped-volume-race-1835cdd0-0cff-4b88-970d-b62160b3b627: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1835cdd0-0cff-4b88-970d-b62160b3b627 in namespace emptydir-wrapper-8533, will wait for the garbage collector to delete the pods
Aug  5 13:34:07.225: INFO: Deleting ReplicationController wrapped-volume-race-1835cdd0-0cff-4b88-970d-b62160b3b627 took: 21.180924ms
Aug  5 13:34:07.528: INFO: Terminating ReplicationController wrapped-volume-race-1835cdd0-0cff-4b88-970d-b62160b3b627 pods took: 303.827297ms
STEP: Creating RC which spawns configmap-volume pods
Aug  5 13:34:45.183: INFO: Pod name wrapped-volume-race-4f32e745-1856-4bb7-831e-6849f380af02: Found 0 pods out of 5
Aug  5 13:34:50.201: INFO: Pod name wrapped-volume-race-4f32e745-1856-4bb7-831e-6849f380af02: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4f32e745-1856-4bb7-831e-6849f380af02 in namespace emptydir-wrapper-8533, will wait for the garbage collector to delete the pods
Aug  5 13:35:06.323: INFO: Deleting ReplicationController wrapped-volume-race-4f32e745-1856-4bb7-831e-6849f380af02 took: 35.222451ms
Aug  5 13:35:06.624: INFO: Terminating ReplicationController wrapped-volume-race-4f32e745-1856-4bb7-831e-6849f380af02 pods took: 300.26666ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:35:46.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8533" for this suite.
Aug  5 13:35:56.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:35:56.877: INFO: namespace emptydir-wrapper-8533 deletion completed in 10.091119494s

• [SLOW TEST:193.615 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
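
The wrapper-volume race test creates 50 ConfigMaps, then three times spins up a ReplicationController whose 5 pods each mount all 50 as volumes, checking that concurrent wrapper-volume mounts do not race. The setup step looks roughly like this (names illustrative; the suite uses generated UIDs):

  for i in $(seq 0 49); do
    kubectl create configmap racey-cm-$i -n emptydir-wrapper-demo --from-literal=data=1
  done
  # an RC whose pod template lists racey-cm-0 .. racey-cm-49 as configMap
  # volumes, scaled to 5 replicas, is then created and torn down three
  # times, as logged above.
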
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:35:56.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug  5 13:35:56.967: INFO: Waiting up to 5m0s for pod "pod-6a89395b-da2e-4453-a5b9-b2c73ed8e8a3" in namespace "emptydir-1079" to be "success or failure"
Aug  5 13:35:56.970: INFO: Pod "pod-6a89395b-da2e-4453-a5b9-b2c73ed8e8a3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.233056ms
Aug  5 13:35:59.024: INFO: Pod "pod-6a89395b-da2e-4453-a5b9-b2c73ed8e8a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056467434s
Aug  5 13:36:01.027: INFO: Pod "pod-6a89395b-da2e-4453-a5b9-b2c73ed8e8a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059973492s
STEP: Saw pod success
Aug  5 13:36:01.027: INFO: Pod "pod-6a89395b-da2e-4453-a5b9-b2c73ed8e8a3" satisfied condition "success or failure"
Aug  5 13:36:01.030: INFO: Trying to get logs from node iruya-worker2 pod pod-6a89395b-da2e-4453-a5b9-b2c73ed8e8a3 container test-container: 
STEP: delete the pod
Aug  5 13:36:01.063: INFO: Waiting for pod pod-6a89395b-da2e-4453-a5b9-b2c73ed8e8a3 to disappear
Aug  5 13:36:01.079: INFO: Pod pod-6a89395b-da2e-4453-a5b9-b2c73ed8e8a3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:36:01.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1079" for this suite.
Aug  5 13:36:07.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:36:07.207: INFO: namespace emptydir-1079 deletion completed in 6.122299023s

• [SLOW TEST:10.330 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
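
The (non-root,0777,tmpfs) variant differs from the earlier default-medium run only in the file mode and the volume definition, which requests a RAM-backed mount:

  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory    # tmpfs instead of node disk
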
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:36:07.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-1357
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug  5 13:36:07.238: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug  5 13:36:35.326: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.119:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1357 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:36:35.326: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:36:35.352703       6 log.go:172] (0xc002e80630) (0xc0016f12c0) Create stream
I0805 13:36:35.352833       6 log.go:172] (0xc002e80630) (0xc0016f12c0) Stream added, broadcasting: 1
I0805 13:36:35.355193       6 log.go:172] (0xc002e80630) Reply frame received for 1
I0805 13:36:35.355230       6 log.go:172] (0xc002e80630) (0xc0016f1400) Create stream
I0805 13:36:35.355243       6 log.go:172] (0xc002e80630) (0xc0016f1400) Stream added, broadcasting: 3
I0805 13:36:35.356314       6 log.go:172] (0xc002e80630) Reply frame received for 3
I0805 13:36:35.356345       6 log.go:172] (0xc002e80630) (0xc0016f14a0) Create stream
I0805 13:36:35.356358       6 log.go:172] (0xc002e80630) (0xc0016f14a0) Stream added, broadcasting: 5
I0805 13:36:35.358381       6 log.go:172] (0xc002e80630) Reply frame received for 5
I0805 13:36:35.434217       6 log.go:172] (0xc002e80630) Data frame received for 5
I0805 13:36:35.434252       6 log.go:172] (0xc0016f14a0) (5) Data frame handling
I0805 13:36:35.434301       6 log.go:172] (0xc002e80630) Data frame received for 3
I0805 13:36:35.434341       6 log.go:172] (0xc0016f1400) (3) Data frame handling
I0805 13:36:35.434364       6 log.go:172] (0xc0016f1400) (3) Data frame sent
I0805 13:36:35.434379       6 log.go:172] (0xc002e80630) Data frame received for 3
I0805 13:36:35.434390       6 log.go:172] (0xc0016f1400) (3) Data frame handling
I0805 13:36:35.436260       6 log.go:172] (0xc002e80630) Data frame received for 1
I0805 13:36:35.436282       6 log.go:172] (0xc0016f12c0) (1) Data frame handling
I0805 13:36:35.436296       6 log.go:172] (0xc0016f12c0) (1) Data frame sent
I0805 13:36:35.436308       6 log.go:172] (0xc002e80630) (0xc0016f12c0) Stream removed, broadcasting: 1
I0805 13:36:35.436322       6 log.go:172] (0xc002e80630) Go away received
I0805 13:36:35.436455       6 log.go:172] (0xc002e80630) (0xc0016f12c0) Stream removed, broadcasting: 1
I0805 13:36:35.436485       6 log.go:172] (0xc002e80630) (0xc0016f1400) Stream removed, broadcasting: 3
I0805 13:36:35.436508       6 log.go:172] (0xc002e80630) (0xc0016f14a0) Stream removed, broadcasting: 5
Aug  5 13:36:35.436: INFO: Found all expected endpoints: [netserver-0]
Aug  5 13:36:35.439: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.94:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1357 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:36:35.439: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:36:35.485658       6 log.go:172] (0xc002b94840) (0xc001c40960) Create stream
I0805 13:36:35.485696       6 log.go:172] (0xc002b94840) (0xc001c40960) Stream added, broadcasting: 1
I0805 13:36:35.488213       6 log.go:172] (0xc002b94840) Reply frame received for 1
I0805 13:36:35.488256       6 log.go:172] (0xc002b94840) (0xc001b139a0) Create stream
I0805 13:36:35.488271       6 log.go:172] (0xc002b94840) (0xc001b139a0) Stream added, broadcasting: 3
I0805 13:36:35.489268       6 log.go:172] (0xc002b94840) Reply frame received for 3
I0805 13:36:35.489345       6 log.go:172] (0xc002b94840) (0xc001c40a00) Create stream
I0805 13:36:35.489371       6 log.go:172] (0xc002b94840) (0xc001c40a00) Stream added, broadcasting: 5
I0805 13:36:35.490335       6 log.go:172] (0xc002b94840) Reply frame received for 5
I0805 13:36:35.559929       6 log.go:172] (0xc002b94840) Data frame received for 5
I0805 13:36:35.559960       6 log.go:172] (0xc001c40a00) (5) Data frame handling
I0805 13:36:35.560034       6 log.go:172] (0xc002b94840) Data frame received for 3
I0805 13:36:35.560075       6 log.go:172] (0xc001b139a0) (3) Data frame handling
I0805 13:36:35.560102       6 log.go:172] (0xc001b139a0) (3) Data frame sent
I0805 13:36:35.560113       6 log.go:172] (0xc002b94840) Data frame received for 3
I0805 13:36:35.560125       6 log.go:172] (0xc001b139a0) (3) Data frame handling
I0805 13:36:35.566621       6 log.go:172] (0xc002b94840) Data frame received for 1
I0805 13:36:35.566646       6 log.go:172] (0xc001c40960) (1) Data frame handling
I0805 13:36:35.566679       6 log.go:172] (0xc001c40960) (1) Data frame sent
I0805 13:36:35.566694       6 log.go:172] (0xc002b94840) (0xc001c40960) Stream removed, broadcasting: 1
I0805 13:36:35.566799       6 log.go:172] (0xc002b94840) Go away received
I0805 13:36:35.566867       6 log.go:172] (0xc002b94840) (0xc001c40960) Stream removed, broadcasting: 1
I0805 13:36:35.566896       6 log.go:172] (0xc002b94840) (0xc001b139a0) Stream removed, broadcasting: 3
I0805 13:36:35.566905       6 log.go:172] (0xc002b94840) (0xc001c40a00) Stream removed, broadcasting: 5
Aug  5 13:36:35.566: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:36:35.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1357" for this suite.
Aug  5 13:36:57.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:36:57.738: INFO: namespace pod-network-test-1357 deletion completed in 22.168449048s

• [SLOW TEST:50.531 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
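
Each endpoint check above is a curl from the host-network helper pod to a netserver pod IP, executed over the exec subresource (the stream create/remove chatter is the SPDY connection being set up and torn down). The same probe by hand, with the pod IP taken from this run:

  kubectl exec host-test-container-pod -n pod-network-test-1357 -- \
    sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.119:8080/hostName"
  # the netserver pods answer /hostName with their own name, e.g. netserver-0
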
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:36:57.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug  5 13:36:57.818: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4120,SelfLink:/api/v1/namespaces/watch-4120/configmaps/e2e-watch-test-configmap-a,UID:9134caad-6466-4bbf-bfe9-29c56c9d01d0,ResourceVersion:3097811,Generation:0,CreationTimestamp:2020-08-05 13:36:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug  5 13:36:57.818: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4120,SelfLink:/api/v1/namespaces/watch-4120/configmaps/e2e-watch-test-configmap-a,UID:9134caad-6466-4bbf-bfe9-29c56c9d01d0,ResourceVersion:3097811,Generation:0,CreationTimestamp:2020-08-05 13:36:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug  5 13:37:07.826: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4120,SelfLink:/api/v1/namespaces/watch-4120/configmaps/e2e-watch-test-configmap-a,UID:9134caad-6466-4bbf-bfe9-29c56c9d01d0,ResourceVersion:3097831,Generation:0,CreationTimestamp:2020-08-05 13:36:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug  5 13:37:07.826: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4120,SelfLink:/api/v1/namespaces/watch-4120/configmaps/e2e-watch-test-configmap-a,UID:9134caad-6466-4bbf-bfe9-29c56c9d01d0,ResourceVersion:3097831,Generation:0,CreationTimestamp:2020-08-05 13:36:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug  5 13:37:17.834: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4120,SelfLink:/api/v1/namespaces/watch-4120/configmaps/e2e-watch-test-configmap-a,UID:9134caad-6466-4bbf-bfe9-29c56c9d01d0,ResourceVersion:3097852,Generation:0,CreationTimestamp:2020-08-05 13:36:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug  5 13:37:17.834: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4120,SelfLink:/api/v1/namespaces/watch-4120/configmaps/e2e-watch-test-configmap-a,UID:9134caad-6466-4bbf-bfe9-29c56c9d01d0,ResourceVersion:3097852,Generation:0,CreationTimestamp:2020-08-05 13:36:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug  5 13:37:27.839: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4120,SelfLink:/api/v1/namespaces/watch-4120/configmaps/e2e-watch-test-configmap-a,UID:9134caad-6466-4bbf-bfe9-29c56c9d01d0,ResourceVersion:3097873,Generation:0,CreationTimestamp:2020-08-05 13:36:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug  5 13:37:27.840: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4120,SelfLink:/api/v1/namespaces/watch-4120/configmaps/e2e-watch-test-configmap-a,UID:9134caad-6466-4bbf-bfe9-29c56c9d01d0,ResourceVersion:3097873,Generation:0,CreationTimestamp:2020-08-05 13:36:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug  5 13:37:37.847: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4120,SelfLink:/api/v1/namespaces/watch-4120/configmaps/e2e-watch-test-configmap-b,UID:89323c92-b6c0-496c-bb86-f9ca7f4331fd,ResourceVersion:3097892,Generation:0,CreationTimestamp:2020-08-05 13:37:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug  5 13:37:37.847: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4120,SelfLink:/api/v1/namespaces/watch-4120/configmaps/e2e-watch-test-configmap-b,UID:89323c92-b6c0-496c-bb86-f9ca7f4331fd,ResourceVersion:3097892,Generation:0,CreationTimestamp:2020-08-05 13:37:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug  5 13:37:47.854: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4120,SelfLink:/api/v1/namespaces/watch-4120/configmaps/e2e-watch-test-configmap-b,UID:89323c92-b6c0-496c-bb86-f9ca7f4331fd,ResourceVersion:3097912,Generation:0,CreationTimestamp:2020-08-05 13:37:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug  5 13:37:47.855: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4120,SelfLink:/api/v1/namespaces/watch-4120/configmaps/e2e-watch-test-configmap-b,UID:89323c92-b6c0-496c-bb86-f9ca7f4331fd,ResourceVersion:3097912,Generation:0,CreationTimestamp:2020-08-05 13:37:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:37:57.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4120" for this suite.
Aug  5 13:38:03.880: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:38:03.956: INFO: namespace watch-4120 deletion completed in 6.09529592s

• [SLOW TEST:66.217 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
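
The add/update/delete watch test is straightforward to replay with kubectl: open a label-filtered watch, then mutate matching configmaps. Namespace and object names below are from this run; the label is attached after creation since kubectl create configmap takes no label flag:

  kubectl get configmaps -n watch-4120 -l watch-this-configmap=multiple-watchers-A -w &
  kubectl create configmap e2e-watch-test-configmap-a -n watch-4120
  kubectl label configmap e2e-watch-test-configmap-a -n watch-4120 watch-this-configmap=multiple-watchers-A
  kubectl patch configmap e2e-watch-test-configmap-a -n watch-4120 -p '{"data":{"mutation":"1"}}'
  kubectl delete configmap e2e-watch-test-configmap-a -n watch-4120
  # the watch prints the ADDED, MODIFIED, and DELETED events logged above
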
SSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:38:03.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug  5 13:38:04.023: INFO: Waiting up to 5m0s for pod "downward-api-eb122234-658c-448a-8ed6-a7f8b80b28b8" in namespace "downward-api-4020" to be "success or failure"
Aug  5 13:38:04.026: INFO: Pod "downward-api-eb122234-658c-448a-8ed6-a7f8b80b28b8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.238782ms
Aug  5 13:38:06.029: INFO: Pod "downward-api-eb122234-658c-448a-8ed6-a7f8b80b28b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006663524s
Aug  5 13:38:08.033: INFO: Pod "downward-api-eb122234-658c-448a-8ed6-a7f8b80b28b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009912273s
STEP: Saw pod success
Aug  5 13:38:08.033: INFO: Pod "downward-api-eb122234-658c-448a-8ed6-a7f8b80b28b8" satisfied condition "success or failure"
Aug  5 13:38:08.035: INFO: Trying to get logs from node iruya-worker pod downward-api-eb122234-658c-448a-8ed6-a7f8b80b28b8 container dapi-container: 
STEP: delete the pod
Aug  5 13:38:08.058: INFO: Waiting for pod downward-api-eb122234-658c-448a-8ed6-a7f8b80b28b8 to disappear
Aug  5 13:38:08.062: INFO: Pod downward-api-eb122234-658c-448a-8ed6-a7f8b80b28b8 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:38:08.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4020" for this suite.
Aug  5 13:38:14.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:38:14.152: INFO: namespace downward-api-4020 deletion completed in 6.087461846s

• [SLOW TEST:10.196 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
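
The pod spec behind this check never appears in the log; a minimal equivalent that injects the node's IP through the downward API might look like the following sketch (names and image are illustrative):

    // Sketch: expose the host IP to a container via the downward API.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Env: []corev1.EnvVar{{
                        Name: "HOST_IP",
                        ValueFrom: &corev1.EnvVarSource{
                            // Resolved by the kubelet at pod start; the test
                            // greps the container's output for this value.
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                        },
                    }},
                }},
            },
        }
        fmt.Println(pod.Name)
    }
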
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:38:14.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:38:40.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8491" for this suite.
Aug  5 13:38:46.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:38:46.569: INFO: namespace namespaces-8491 deletion completed in 6.091668215s
STEP: Destroying namespace "nsdeletetest-2974" for this suite.
Aug  5 13:38:46.571: INFO: Namespace nsdeletetest-2974 was already deleted
STEP: Destroying namespace "nsdeletetest-9088" for this suite.
Aug  5 13:38:52.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:38:52.718: INFO: namespace nsdeletetest-9088 deletion completed in 6.146922149s

• [SLOW TEST:38.566 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
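
Namespace deletion, as the steps above show, is asynchronous: the namespace sits in Terminating while its pods are removed, and only then disappears. A client-go sketch of the delete-then-poll pattern, assuming v1.15-era signatures and an illustrative namespace name:

    // Sketch: delete a namespace and wait for it to disappear, mirroring the
    // "Deleting the namespace" / "Waiting for the namespace to be removed."
    // steps above.
    package main

    import (
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        const ns = "nsdeletetest-demo" // illustrative name
        if err := cs.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
        err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            _, err := cs.CoreV1().Namespaces().Get(ns, metav1.GetOptions{})
            if err == nil {
                return false, nil // still terminating
            }
            if apierrors.IsNotFound(err) {
                return true, nil // gone for good
            }
            return false, err
        })
        if err != nil {
            panic(err)
        }
    }
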
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:38:52.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-cf2132bb-686c-44ae-8d39-c3fb4e4bca11
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:38:52.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6264" for this suite.
Aug  5 13:38:58.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:38:58.859: INFO: namespace configmap-6264 deletion completed in 6.101171811s

• [SLOW TEST:6.141 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
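
The single step above hides the interesting part: apiserver validation rejects a ConfigMap whose Data map contains an empty key, and the test passes only because Create errors out. A hedged sketch (v1.15-era signatures, illustrative names):

    // Sketch: creating a ConfigMap with an empty data key must fail.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
            Data:       map[string]string{"": "value"}, // empty key: invalid
        }
        _, err = cs.CoreV1().ConfigMaps("default").Create(cm)
        fmt.Println(err) // expect an Invalid (422) error, never nil
    }
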
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:38:58.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Aug  5 13:38:58.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9653'
Aug  5 13:39:01.884: INFO: stderr: ""
Aug  5 13:39:01.884: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug  5 13:39:01.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9653'
Aug  5 13:39:02.009: INFO: stderr: ""
Aug  5 13:39:02.009: INFO: stdout: "update-demo-nautilus-ftstp update-demo-nautilus-t24c5 "
Aug  5 13:39:02.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftstp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9653'
Aug  5 13:39:02.125: INFO: stderr: ""
Aug  5 13:39:02.125: INFO: stdout: ""
Aug  5 13:39:02.125: INFO: update-demo-nautilus-ftstp is created but not running
Aug  5 13:39:07.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9653'
Aug  5 13:39:07.231: INFO: stderr: ""
Aug  5 13:39:07.231: INFO: stdout: "update-demo-nautilus-ftstp update-demo-nautilus-t24c5 "
Aug  5 13:39:07.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftstp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9653'
Aug  5 13:39:07.327: INFO: stderr: ""
Aug  5 13:39:07.327: INFO: stdout: "true"
Aug  5 13:39:07.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ftstp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9653'
Aug  5 13:39:07.413: INFO: stderr: ""
Aug  5 13:39:07.413: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  5 13:39:07.413: INFO: validating pod update-demo-nautilus-ftstp
Aug  5 13:39:07.417: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  5 13:39:07.417: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug  5 13:39:07.417: INFO: update-demo-nautilus-ftstp is verified up and running
Aug  5 13:39:07.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t24c5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9653'
Aug  5 13:39:07.507: INFO: stderr: ""
Aug  5 13:39:07.507: INFO: stdout: "true"
Aug  5 13:39:07.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t24c5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9653'
Aug  5 13:39:07.607: INFO: stderr: ""
Aug  5 13:39:07.607: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  5 13:39:07.607: INFO: validating pod update-demo-nautilus-t24c5
Aug  5 13:39:07.610: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  5 13:39:07.610: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug  5 13:39:07.610: INFO: update-demo-nautilus-t24c5 is verified up and running
STEP: rolling-update to new replication controller
Aug  5 13:39:07.612: INFO: scanned /root for discovery docs: 
Aug  5 13:39:07.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9653'
Aug  5 13:39:30.173: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug  5 13:39:30.173: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug  5 13:39:30.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9653'
Aug  5 13:39:30.282: INFO: stderr: ""
Aug  5 13:39:30.282: INFO: stdout: "update-demo-kitten-vhql6 update-demo-kitten-wwg9x "
Aug  5 13:39:30.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vhql6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9653'
Aug  5 13:39:30.372: INFO: stderr: ""
Aug  5 13:39:30.372: INFO: stdout: "true"
Aug  5 13:39:30.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-vhql6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9653'
Aug  5 13:39:30.465: INFO: stderr: ""
Aug  5 13:39:30.465: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug  5 13:39:30.465: INFO: validating pod update-demo-kitten-vhql6
Aug  5 13:39:30.469: INFO: got data: {
  "image": "kitten.jpg"
}

Aug  5 13:39:30.469: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Aug  5 13:39:30.469: INFO: update-demo-kitten-vhql6 is verified up and running
Aug  5 13:39:30.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wwg9x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9653'
Aug  5 13:39:30.557: INFO: stderr: ""
Aug  5 13:39:30.557: INFO: stdout: "true"
Aug  5 13:39:30.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wwg9x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9653'
Aug  5 13:39:30.642: INFO: stderr: ""
Aug  5 13:39:30.642: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug  5 13:39:30.642: INFO: validating pod update-demo-kitten-wwg9x
Aug  5 13:39:30.646: INFO: got data: {
  "image": "kitten.jpg"
}

Aug  5 13:39:30.646: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Aug  5 13:39:30.646: INFO: update-demo-kitten-wwg9x is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:39:30.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9653" for this suite.
Aug  5 13:39:54.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:39:54.802: INFO: namespace kubectl-9653 deletion completed in 24.15342218s

• [SLOW TEST:55.943 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
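
The stderr above already flags that kubectl rolling-update is deprecated (it has since been removed entirely). The modern replacement for this nautilus-to-kitten image swap is a Deployment rollout; the sketch below shows that substitute technique with client-go, not what the test itself executes (v1.15-era signatures, illustrative names):

    // Sketch: trigger a rolling image update by editing a Deployment's pod
    // template; the Deployment controller then scales pods up and down much
    // as the rolling-update output above describes.
    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Retry on conflict in case something else updates the object.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            d, err := cs.AppsV1().Deployments("default").Get("update-demo", metav1.GetOptions{})
            if err != nil {
                return err
            }
            d.Spec.Template.Spec.Containers[0].Image =
                "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
            _, err = cs.AppsV1().Deployments("default").Update(d)
            return err
        })
        if err != nil {
            panic(err)
        }
    }
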
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:39:54.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug  5 13:39:54.868: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug  5 13:39:54.881: INFO: Waiting for terminating namespaces to be deleted...
Aug  5 13:39:54.883: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug  5 13:39:54.887: INFO: kindnet-k7tjm from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container status recorded)
Aug  5 13:39:54.887: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug  5 13:39:54.887: INFO: kube-proxy-jzrnl from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container status recorded)
Aug  5 13:39:54.887: INFO: 	Container kube-proxy ready: true, restart count 0
Aug  5 13:39:54.887: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug  5 13:39:54.894: INFO: kube-proxy-9ktgx from kube-system started at 2020-07-19 21:16:10 +0000 UTC (1 container status recorded)
Aug  5 13:39:54.894: INFO: 	Container kube-proxy ready: true, restart count 0
Aug  5 13:39:54.894: INFO: kindnet-8kg9z from kube-system started at 2020-07-19 21:16:09 +0000 UTC (1 container status recorded)
Aug  5 13:39:54.894: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162862e2fb5f0dad], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:39:55.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-898" for this suite.
Aug  5 13:40:01.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:40:02.027: INFO: namespace sched-pred-898 deletion completed in 6.096359475s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.224 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
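
The pod spec that provokes the FailedScheduling event is not printed; the essential part is a NodeSelector no node satisfies, roughly as in this sketch (label and image are illustrative):

    // Sketch: a pod whose NodeSelector matches no node label stays Pending,
    // and the scheduler emits the "0/3 nodes are available" event above.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
            Spec: corev1.PodSpec{
                // No node carries this label, so scheduling must fail.
                NodeSelector: map[string]string{"nonexistent-label": "true"},
                Containers: []corev1.Container{{
                    Name:  "pause",
                    Image: "k8s.gcr.io/pause:3.1",
                }},
            },
        }
        fmt.Println(pod.Spec.NodeSelector)
    }
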
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:40:02.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4938, will wait for the garbage collector to delete the pods
Aug  5 13:40:08.152: INFO: Deleting Job.batch foo took: 7.038823ms
Aug  5 13:40:08.453: INFO: Terminating Job.batch foo pods took: 300.235619ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:40:45.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4938" for this suite.
Aug  5 13:40:51.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:40:51.142: INFO: namespace job-4938 deletion completed in 6.080729275s

• [SLOW TEST:49.114 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
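
The "will wait for the garbage collector to delete the pods" phrasing corresponds to deleting the Job with a propagation policy and letting the GC clean up its dependents. A sketch assuming v1.15-era signatures and illustrative names:

    // Sketch: delete a Job and let the garbage collector remove its pods.
    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Background propagation: the Job object goes away immediately and
        // the GC deletes the dependent pods afterwards. Foreground would
        // instead block the Job's own deletion on the pods.
        policy := metav1.DeletePropagationBackground
        err = cs.BatchV1().Jobs("job-demo").Delete("foo", &metav1.DeleteOptions{
            PropagationPolicy: &policy,
        })
        if err != nil {
            panic(err)
        }
    }
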
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:40:51.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Aug  5 13:40:51.216: INFO: Waiting up to 5m0s for pod "client-containers-a96e46cc-67b8-4d87-af57-20902f4e80e6" in namespace "containers-854" to be "success or failure"
Aug  5 13:40:51.266: INFO: Pod "client-containers-a96e46cc-67b8-4d87-af57-20902f4e80e6": Phase="Pending", Reason="", readiness=false. Elapsed: 49.723617ms
Aug  5 13:40:53.270: INFO: Pod "client-containers-a96e46cc-67b8-4d87-af57-20902f4e80e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053913206s
Aug  5 13:40:55.274: INFO: Pod "client-containers-a96e46cc-67b8-4d87-af57-20902f4e80e6": Phase="Running", Reason="", readiness=true. Elapsed: 4.057931975s
Aug  5 13:40:57.278: INFO: Pod "client-containers-a96e46cc-67b8-4d87-af57-20902f4e80e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06237273s
STEP: Saw pod success
Aug  5 13:40:57.278: INFO: Pod "client-containers-a96e46cc-67b8-4d87-af57-20902f4e80e6" satisfied condition "success or failure"
Aug  5 13:40:57.282: INFO: Trying to get logs from node iruya-worker pod client-containers-a96e46cc-67b8-4d87-af57-20902f4e80e6 container test-container: 
STEP: delete the pod
Aug  5 13:40:57.299: INFO: Waiting for pod client-containers-a96e46cc-67b8-4d87-af57-20902f4e80e6 to disappear
Aug  5 13:40:57.303: INFO: Pod client-containers-a96e46cc-67b8-4d87-af57-20902f4e80e6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:40:57.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-854" for this suite.
Aug  5 13:41:03.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:41:03.442: INFO: namespace containers-854 deletion completed in 6.117350807s

• [SLOW TEST:12.299 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
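
What this test checks is the standard override rule: a container's Args replace the image's CMD, while Command (if set) replaces its ENTRYPOINT. A minimal illustrative sketch:

    // Sketch: override the image's default arguments (docker CMD).
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "busybox",
                    // The image's default CMD is ignored; these args are
                    // handed to the image's ENTRYPOINT instead.
                    Args: []string{"echo", "overridden", "arguments"},
                }},
            },
        }
        fmt.Println(pod.Spec.Containers[0].Args)
    }
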
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:41:03.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-ljxk
STEP: Creating a pod to test atomic-volume-subpath
Aug  5 13:41:03.505: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ljxk" in namespace "subpath-3801" to be "success or failure"
Aug  5 13:41:03.519: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Pending", Reason="", readiness=false. Elapsed: 13.492231ms
Aug  5 13:41:05.524: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01800287s
Aug  5 13:41:07.528: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Running", Reason="", readiness=true. Elapsed: 4.022362917s
Aug  5 13:41:09.532: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Running", Reason="", readiness=true. Elapsed: 6.026569344s
Aug  5 13:41:11.536: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Running", Reason="", readiness=true. Elapsed: 8.03064301s
Aug  5 13:41:13.541: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Running", Reason="", readiness=true. Elapsed: 10.035351904s
Aug  5 13:41:15.545: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Running", Reason="", readiness=true. Elapsed: 12.039435486s
Aug  5 13:41:17.549: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Running", Reason="", readiness=true. Elapsed: 14.042989706s
Aug  5 13:41:19.552: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Running", Reason="", readiness=true. Elapsed: 16.046879225s
Aug  5 13:41:21.557: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Running", Reason="", readiness=true. Elapsed: 18.051154232s
Aug  5 13:41:23.560: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Running", Reason="", readiness=true. Elapsed: 20.05452494s
Aug  5 13:41:25.565: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Running", Reason="", readiness=true. Elapsed: 22.05899327s
Aug  5 13:41:27.568: INFO: Pod "pod-subpath-test-downwardapi-ljxk": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.062114355s
STEP: Saw pod success
Aug  5 13:41:27.568: INFO: Pod "pod-subpath-test-downwardapi-ljxk" satisfied condition "success or failure"
Aug  5 13:41:27.570: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-ljxk container test-container-subpath-downwardapi-ljxk: 
STEP: delete the pod
Aug  5 13:41:27.606: INFO: Waiting for pod pod-subpath-test-downwardapi-ljxk to disappear
Aug  5 13:41:27.640: INFO: Pod pod-subpath-test-downwardapi-ljxk no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-ljxk
Aug  5 13:41:27.640: INFO: Deleting pod "pod-subpath-test-downwardapi-ljxk" in namespace "subpath-3801"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:41:27.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3801" for this suite.
Aug  5 13:41:33.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:41:33.728: INFO: namespace subpath-3801 deletion completed in 6.082124643s

• [SLOW TEST:30.286 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
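
The pod driving this test pairs a downward-API volume with a subPath mount, so the container sees a single atomically-updated file rather than the whole volume. An illustrative sketch of that shape:

    // Sketch: mount one file out of a downward-API volume via subPath.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container-subpath",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podname"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podname",
                        SubPath:   "podname", // just this file, not the volume root
                    }},
                }},
            },
        }
        fmt.Println(pod.Name)
    }
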
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:41:33.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-1ce69ad4-f429-44e2-97b3-a3f1628527d7 in namespace container-probe-1751
Aug  5 13:41:37.847: INFO: Started pod busybox-1ce69ad4-f429-44e2-97b3-a3f1628527d7 in namespace container-probe-1751
STEP: checking the pod's current state and verifying that restartCount is present
Aug  5 13:41:37.849: INFO: Initial restart count of pod busybox-1ce69ad4-f429-44e2-97b3-a3f1628527d7 is 0
Aug  5 13:42:27.949: INFO: Restart count of pod container-probe-1751/busybox-1ce69ad4-f429-44e2-97b3-a3f1628527d7 is now 1 (50.099873698s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:42:27.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1751" for this suite.
Aug  5 13:42:33.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:42:34.052: INFO: namespace container-probe-1751 deletion completed in 6.085987592s

• [SLOW TEST:60.324 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
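
The restart observed at the 50-second mark follows from the probe pattern this test uses: the container deletes its own health file partway through its life. A sketch with v1.15-era field names (the embedded Handler was later renamed ProbeHandler):

    // Sketch: container is healthy while /tmp/health exists, then removes
    // it; the kubelet's exec probe fails and restarts the container
    // (restartCount goes 0 -> 1, as the log above records).
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "busybox",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                        },
                        InitialDelaySeconds: 5,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        fmt.Println(pod.Name)
    }
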
SSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:42:34.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-25bb2992-55e0-4bb1-96c1-3ee126825829 in namespace container-probe-1657
Aug  5 13:42:38.179: INFO: Started pod busybox-25bb2992-55e0-4bb1-96c1-3ee126825829 in namespace container-probe-1657
STEP: checking the pod's current state and verifying that restartCount is present
Aug  5 13:42:38.182: INFO: Initial restart count of pod busybox-25bb2992-55e0-4bb1-96c1-3ee126825829 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:46:39.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1657" for this suite.
Aug  5 13:46:45.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:46:45.582: INFO: namespace container-probe-1657 deletion completed in 6.090848229s

• [SLOW TEST:251.530 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:46:45.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:46:50.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-360" for this suite.
Aug  5 13:47:12.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:47:12.808: INFO: namespace replication-controller-360 deletion completed in 22.087649536s

• [SLOW TEST:27.226 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
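
Adoption works as the Given/When/Then steps describe: the RC's selector matches the pre-existing bare pod, so its controller claims the pod via an ownerReference instead of creating a replacement. An illustrative sketch of such an RC:

    // Sketch: an RC whose selector matches an existing orphan pod; the
    // controller adopts it rather than starting a new replica.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        replicas := int32(1)
        rc := &corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: map[string]string{"name": "pod-adoption"},
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{
                        Labels: map[string]string{"name": "pod-adoption"},
                    },
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "pod-adoption",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        fmt.Println(rc.Spec.Selector)
    }
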
SSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:47:12.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug  5 13:47:12.858: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:47:18.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1311" for this suite.
Aug  5 13:47:24.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:47:24.822: INFO: namespace init-container-1311 deletion completed in 6.090575222s

• [SLOW TEST:12.014 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
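
With RestartPolicy Never, one failing init container is enough to fail the whole pod, and the app container is never started. A minimal illustrative sketch:

    // Sketch: failing init container on a RestartNever pod.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                InitContainers: []corev1.Container{{
                    Name:    "init-fails",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "exit 1"}, // always fails
                }},
                Containers: []corev1.Container{{
                    Name:    "app",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "echo this never runs"},
                }},
            },
        }
        fmt.Println(pod.Name) // pod ends in phase Failed; "app" never starts
    }
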
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:47:24.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug  5 13:47:24.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-789'
Aug  5 13:47:24.980: INFO: stderr: ""
Aug  5 13:47:24.980: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug  5 13:47:24.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-789'
Aug  5 13:47:28.759: INFO: stderr: ""
Aug  5 13:47:28.759: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:47:28.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-789" for this suite.
Aug  5 13:47:34.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:47:34.856: INFO: namespace kubectl-789 deletion completed in 6.093452986s

• [SLOW TEST:10.033 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:47:34.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug  5 13:47:43.007: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug  5 13:47:43.017: INFO: Pod pod-with-prestop-http-hook still exists
Aug  5 13:47:45.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug  5 13:47:45.022: INFO: Pod pod-with-prestop-http-hook still exists
Aug  5 13:47:47.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug  5 13:47:47.022: INFO: Pod pod-with-prestop-http-hook still exists
Aug  5 13:47:49.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug  5 13:47:49.022: INFO: Pod pod-with-prestop-http-hook still exists
Aug  5 13:47:51.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug  5 13:47:51.022: INFO: Pod pod-with-prestop-http-hook still exists
Aug  5 13:47:53.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug  5 13:47:53.021: INFO: Pod pod-with-prestop-http-hook still exists
Aug  5 13:47:55.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug  5 13:47:55.022: INFO: Pod pod-with-prestop-http-hook still exists
Aug  5 13:47:57.017: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug  5 13:47:57.022: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:47:57.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5756" for this suite.
Aug  5 13:48:19.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:48:19.117: INFO: namespace container-lifecycle-hook-5756 deletion completed in 22.083254471s

• [SLOW TEST:44.260 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
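
The hook under test is a preStop HTTPGet: when the pod is deleted, the kubelet issues the request before killing the container, and the test then verifies its handler pod received it. A sketch with illustrative path and port:

    // Sketch: a container with a preStop HTTP lifecycle hook.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "pod-with-prestop-http-hook",
                    Image: "k8s.gcr.io/pause:3.1",
                    Lifecycle: &corev1.Lifecycle{
                        PreStop: &corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/echo?msg=prestop", // illustrative handler
                                Port: intstr.FromInt(8080),
                            },
                        },
                    },
                }},
            },
        }
        fmt.Println(pod.Name)
    }
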
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:48:19.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 13:48:19.277: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"01b409d4-115c-429d-893c-c23713676214", Controller:(*bool)(0xc0013b5182), BlockOwnerDeletion:(*bool)(0xc0013b5183)}}
Aug  5 13:48:19.296: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"073f7393-ff4c-4e8c-8226-3419f85dd21c", Controller:(*bool)(0xc00261e882), BlockOwnerDeletion:(*bool)(0xc00261e883)}}
Aug  5 13:48:19.313: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"9239447a-df1c-4ec7-8aea-2c46a3c806d2", Controller:(*bool)(0xc0013b542a), BlockOwnerDeletion:(*bool)(0xc0013b542b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:48:24.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1913" for this suite.
Aug  5 13:48:30.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:48:30.494: INFO: namespace gc-1913 deletion completed in 6.091323461s

• [SLOW TEST:11.376 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
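
The three OwnerReferences dumps above form a cycle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2); the point of the test is that deletion still completes rather than deadlocking the garbage collector on the circular dependency. A sketch of how one such reference is built (the UID is a placeholder for the server-assigned value shown in the log):

    // Sketch: constructing an OwnerReference like those printed above.
    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    func ownedBy(name string, uid types.UID) metav1.OwnerReference {
        t := true
        return metav1.OwnerReference{
            APIVersion:         "v1",
            Kind:               "Pod",
            Name:               name,
            UID:                uid,
            Controller:         &t,
            BlockOwnerDeletion: &t,
        }
    }

    func main() {
        // pod1 <- pod3, pod2 <- pod1, pod3 <- pod2 closes the circle.
        fmt.Println(ownedBy("pod3", types.UID("placeholder-uid")))
    }
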
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:48:30.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Aug  5 13:48:30.541: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix659518009/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:48:30.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5724" for this suite.
Aug  5 13:48:36.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:48:36.713: INFO: namespace kubectl-5724 deletion completed in 6.096979916s

• [SLOW TEST:6.218 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:48:36.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Aug  5 13:48:36.774: INFO: Waiting up to 5m0s for pod "var-expansion-a705d943-bfb1-4cea-b667-4008e1e880f2" in namespace "var-expansion-4335" to be "success or failure"
Aug  5 13:48:36.778: INFO: Pod "var-expansion-a705d943-bfb1-4cea-b667-4008e1e880f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.787482ms
Aug  5 13:48:38.782: INFO: Pod "var-expansion-a705d943-bfb1-4cea-b667-4008e1e880f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007667249s
Aug  5 13:48:40.786: INFO: Pod "var-expansion-a705d943-bfb1-4cea-b667-4008e1e880f2": Phase="Running", Reason="", readiness=true. Elapsed: 4.012111885s
Aug  5 13:48:42.790: INFO: Pod "var-expansion-a705d943-bfb1-4cea-b667-4008e1e880f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015544242s
STEP: Saw pod success
Aug  5 13:48:42.790: INFO: Pod "var-expansion-a705d943-bfb1-4cea-b667-4008e1e880f2" satisfied condition "success or failure"
Aug  5 13:48:42.792: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-a705d943-bfb1-4cea-b667-4008e1e880f2 container dapi-container: 
STEP: delete the pod
Aug  5 13:48:42.810: INFO: Waiting for pod var-expansion-a705d943-bfb1-4cea-b667-4008e1e880f2 to disappear
Aug  5 13:48:42.814: INFO: Pod var-expansion-a705d943-bfb1-4cea-b667-4008e1e880f2 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:48:42.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4335" for this suite.
Aug  5 13:48:48.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:48:48.963: INFO: namespace var-expansion-4335 deletion completed in 6.146132877s

• [SLOW TEST:12.249 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
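
The substitution under test happens kubelet-side: $(TEST_VAR) in a container's args is expanded from the container's own env before the command ever runs. An illustrative sketch:

    // Sketch: variable expansion in container args.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c"},
                    // Expanded to "echo test-value" before the shell sees it.
                    Args: []string{"echo $(TEST_VAR)"},
                    Env:  []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
                }},
            },
        }
        fmt.Println(pod.Spec.Containers[0].Args)
    }
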
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:48:48.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug  5 13:48:59.064: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9100 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:48:59.064: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:48:59.108897       6 log.go:172] (0xc00144a630) (0xc0025f2500) Create stream
I0805 13:48:59.108946       6 log.go:172] (0xc00144a630) (0xc0025f2500) Stream added, broadcasting: 1
I0805 13:48:59.111550       6 log.go:172] (0xc00144a630) Reply frame received for 1
I0805 13:48:59.111585       6 log.go:172] (0xc00144a630) (0xc0025f25a0) Create stream
I0805 13:48:59.111596       6 log.go:172] (0xc00144a630) (0xc0025f25a0) Stream added, broadcasting: 3
I0805 13:48:59.112872       6 log.go:172] (0xc00144a630) Reply frame received for 3
I0805 13:48:59.112928       6 log.go:172] (0xc00144a630) (0xc0001c8280) Create stream
I0805 13:48:59.112953       6 log.go:172] (0xc00144a630) (0xc0001c8280) Stream added, broadcasting: 5
I0805 13:48:59.113979       6 log.go:172] (0xc00144a630) Reply frame received for 5
I0805 13:48:59.204248       6 log.go:172] (0xc00144a630) Data frame received for 5
I0805 13:48:59.204284       6 log.go:172] (0xc0001c8280) (5) Data frame handling
I0805 13:48:59.204304       6 log.go:172] (0xc00144a630) Data frame received for 3
I0805 13:48:59.204315       6 log.go:172] (0xc0025f25a0) (3) Data frame handling
I0805 13:48:59.204332       6 log.go:172] (0xc0025f25a0) (3) Data frame sent
I0805 13:48:59.204342       6 log.go:172] (0xc00144a630) Data frame received for 3
I0805 13:48:59.204350       6 log.go:172] (0xc0025f25a0) (3) Data frame handling
I0805 13:48:59.207077       6 log.go:172] (0xc00144a630) Data frame received for 1
I0805 13:48:59.207090       6 log.go:172] (0xc0025f2500) (1) Data frame handling
I0805 13:48:59.207098       6 log.go:172] (0xc0025f2500) (1) Data frame sent
I0805 13:48:59.208487       6 log.go:172] (0xc00144a630) (0xc0025f2500) Stream removed, broadcasting: 1
I0805 13:48:59.208589       6 log.go:172] (0xc00144a630) (0xc0025f2500) Stream removed, broadcasting: 1
I0805 13:48:59.208604       6 log.go:172] (0xc00144a630) (0xc0025f25a0) Stream removed, broadcasting: 3
I0805 13:48:59.209162       6 log.go:172] (0xc00144a630) (0xc0001c8280) Stream removed, broadcasting: 5
Aug  5 13:48:59.209: INFO: Exec stderr: ""
Aug  5 13:48:59.209: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9100 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:48:59.209: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:48:59.240662       6 log.go:172] (0xc002f4f340) (0xc0001c88c0) Create stream
I0805 13:48:59.240687       6 log.go:172] (0xc002f4f340) (0xc0001c88c0) Stream added, broadcasting: 1
I0805 13:48:59.242736       6 log.go:172] (0xc002f4f340) Reply frame received for 1
I0805 13:48:59.242775       6 log.go:172] (0xc002f4f340) (0xc0025f2640) Create stream
I0805 13:48:59.242790       6 log.go:172] (0xc002f4f340) (0xc0025f2640) Stream added, broadcasting: 3
I0805 13:48:59.243635       6 log.go:172] (0xc002f4f340) Reply frame received for 3
I0805 13:48:59.243664       6 log.go:172] (0xc002f4f340) (0xc0001c8aa0) Create stream
I0805 13:48:59.243673       6 log.go:172] (0xc002f4f340) (0xc0001c8aa0) Stream added, broadcasting: 5
I0805 13:48:59.244377       6 log.go:172] (0xc002f4f340) Reply frame received for 5
I0805 13:48:59.328262       6 log.go:172] (0xc002f4f340) Data frame received for 3
I0805 13:48:59.328331       6 log.go:172] (0xc0025f2640) (3) Data frame handling
I0805 13:48:59.328347       6 log.go:172] (0xc0025f2640) (3) Data frame sent
I0805 13:48:59.328366       6 log.go:172] (0xc002f4f340) Data frame received for 3
I0805 13:48:59.328377       6 log.go:172] (0xc0025f2640) (3) Data frame handling
I0805 13:48:59.328416       6 log.go:172] (0xc002f4f340) Data frame received for 5
I0805 13:48:59.328444       6 log.go:172] (0xc0001c8aa0) (5) Data frame handling
I0805 13:48:59.329963       6 log.go:172] (0xc002f4f340) Data frame received for 1
I0805 13:48:59.330004       6 log.go:172] (0xc0001c88c0) (1) Data frame handling
I0805 13:48:59.330033       6 log.go:172] (0xc0001c88c0) (1) Data frame sent
I0805 13:48:59.330053       6 log.go:172] (0xc002f4f340) (0xc0001c88c0) Stream removed, broadcasting: 1
I0805 13:48:59.330107       6 log.go:172] (0xc002f4f340) Go away received
I0805 13:48:59.330199       6 log.go:172] (0xc002f4f340) (0xc0001c88c0) Stream removed, broadcasting: 1
I0805 13:48:59.330234       6 log.go:172] (0xc002f4f340) (0xc0025f2640) Stream removed, broadcasting: 3
I0805 13:48:59.330277       6 log.go:172] (0xc002f4f340) (0xc0001c8aa0) Stream removed, broadcasting: 5
Aug  5 13:48:59.330: INFO: Exec stderr: ""
Aug  5 13:48:59.330: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9100 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:48:59.330: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:48:59.362340       6 log.go:172] (0xc00144b8c0) (0xc0025f2a00) Create stream
I0805 13:48:59.362384       6 log.go:172] (0xc00144b8c0) (0xc0025f2a00) Stream added, broadcasting: 1
I0805 13:48:59.364418       6 log.go:172] (0xc00144b8c0) Reply frame received for 1
I0805 13:48:59.364453       6 log.go:172] (0xc00144b8c0) (0xc0023c3ea0) Create stream
I0805 13:48:59.364467       6 log.go:172] (0xc00144b8c0) (0xc0023c3ea0) Stream added, broadcasting: 3
I0805 13:48:59.365463       6 log.go:172] (0xc00144b8c0) Reply frame received for 3
I0805 13:48:59.365519       6 log.go:172] (0xc00144b8c0) (0xc0001c8be0) Create stream
I0805 13:48:59.365544       6 log.go:172] (0xc00144b8c0) (0xc0001c8be0) Stream added, broadcasting: 5
I0805 13:48:59.366473       6 log.go:172] (0xc00144b8c0) Reply frame received for 5
I0805 13:48:59.432004       6 log.go:172] (0xc00144b8c0) Data frame received for 3
I0805 13:48:59.432042       6 log.go:172] (0xc0023c3ea0) (3) Data frame handling
I0805 13:48:59.432065       6 log.go:172] (0xc0023c3ea0) (3) Data frame sent
I0805 13:48:59.432086       6 log.go:172] (0xc00144b8c0) Data frame received for 3
I0805 13:48:59.432094       6 log.go:172] (0xc0023c3ea0) (3) Data frame handling
I0805 13:48:59.432224       6 log.go:172] (0xc00144b8c0) Data frame received for 5
I0805 13:48:59.432260       6 log.go:172] (0xc0001c8be0) (5) Data frame handling
I0805 13:48:59.433695       6 log.go:172] (0xc00144b8c0) Data frame received for 1
I0805 13:48:59.433709       6 log.go:172] (0xc0025f2a00) (1) Data frame handling
I0805 13:48:59.433722       6 log.go:172] (0xc0025f2a00) (1) Data frame sent
I0805 13:48:59.433732       6 log.go:172] (0xc00144b8c0) (0xc0025f2a00) Stream removed, broadcasting: 1
I0805 13:48:59.433783       6 log.go:172] (0xc00144b8c0) Go away received
I0805 13:48:59.433812       6 log.go:172] (0xc00144b8c0) (0xc0025f2a00) Stream removed, broadcasting: 1
I0805 13:48:59.433885       6 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc0023c3ea0), 0x5:(*spdystream.Stream)(0xc0001c8be0)}
I0805 13:48:59.433970       6 log.go:172] (0xc00144b8c0) (0xc0023c3ea0) Stream removed, broadcasting: 3
I0805 13:48:59.434010       6 log.go:172] (0xc00144b8c0) (0xc0001c8be0) Stream removed, broadcasting: 5
Aug  5 13:48:59.434: INFO: Exec stderr: ""
Aug  5 13:48:59.434: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9100 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:48:59.434: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:48:59.467985       6 log.go:172] (0xc001ece840) (0xc0001c9040) Create stream
I0805 13:48:59.468015       6 log.go:172] (0xc001ece840) (0xc0001c9040) Stream added, broadcasting: 1
I0805 13:48:59.471833       6 log.go:172] (0xc001ece840) Reply frame received for 1
I0805 13:48:59.471875       6 log.go:172] (0xc001ece840) (0xc0019d60a0) Create stream
I0805 13:48:59.471887       6 log.go:172] (0xc001ece840) (0xc0019d60a0) Stream added, broadcasting: 3
I0805 13:48:59.472937       6 log.go:172] (0xc001ece840) Reply frame received for 3
I0805 13:48:59.472981       6 log.go:172] (0xc001ece840) (0xc0001c90e0) Create stream
I0805 13:48:59.472996       6 log.go:172] (0xc001ece840) (0xc0001c90e0) Stream added, broadcasting: 5
I0805 13:48:59.473910       6 log.go:172] (0xc001ece840) Reply frame received for 5
I0805 13:48:59.536467       6 log.go:172] (0xc001ece840) Data frame received for 5
I0805 13:48:59.536505       6 log.go:172] (0xc0001c90e0) (5) Data frame handling
I0805 13:48:59.536534       6 log.go:172] (0xc001ece840) Data frame received for 3
I0805 13:48:59.536547       6 log.go:172] (0xc0019d60a0) (3) Data frame handling
I0805 13:48:59.536560       6 log.go:172] (0xc0019d60a0) (3) Data frame sent
I0805 13:48:59.536571       6 log.go:172] (0xc001ece840) Data frame received for 3
I0805 13:48:59.536579       6 log.go:172] (0xc0019d60a0) (3) Data frame handling
I0805 13:48:59.537858       6 log.go:172] (0xc001ece840) Data frame received for 1
I0805 13:48:59.537892       6 log.go:172] (0xc0001c9040) (1) Data frame handling
I0805 13:48:59.537905       6 log.go:172] (0xc0001c9040) (1) Data frame sent
I0805 13:48:59.537925       6 log.go:172] (0xc001ece840) (0xc0001c9040) Stream removed, broadcasting: 1
I0805 13:48:59.537943       6 log.go:172] (0xc001ece840) Go away received
I0805 13:48:59.538120       6 log.go:172] (0xc001ece840) (0xc0001c9040) Stream removed, broadcasting: 1
I0805 13:48:59.538182       6 log.go:172] (0xc001ece840) (0xc0019d60a0) Stream removed, broadcasting: 3
I0805 13:48:59.538219       6 log.go:172] (0xc001ece840) (0xc0001c90e0) Stream removed, broadcasting: 5
Aug  5 13:48:59.538: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug  5 13:48:59.538: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9100 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:48:59.538: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:48:59.566015       6 log.go:172] (0xc001f8e210) (0xc0025f2dc0) Create stream
I0805 13:48:59.566043       6 log.go:172] (0xc001f8e210) (0xc0025f2dc0) Stream added, broadcasting: 1
I0805 13:48:59.568832       6 log.go:172] (0xc001f8e210) Reply frame received for 1
I0805 13:48:59.568887       6 log.go:172] (0xc001f8e210) (0xc000fbe320) Create stream
I0805 13:48:59.568905       6 log.go:172] (0xc001f8e210) (0xc000fbe320) Stream added, broadcasting: 3
I0805 13:48:59.569839       6 log.go:172] (0xc001f8e210) Reply frame received for 3
I0805 13:48:59.569882       6 log.go:172] (0xc001f8e210) (0xc0025f2e60) Create stream
I0805 13:48:59.569894       6 log.go:172] (0xc001f8e210) (0xc0025f2e60) Stream added, broadcasting: 5
I0805 13:48:59.570913       6 log.go:172] (0xc001f8e210) Reply frame received for 5
I0805 13:48:59.644347       6 log.go:172] (0xc001f8e210) Data frame received for 5
I0805 13:48:59.644393       6 log.go:172] (0xc0025f2e60) (5) Data frame handling
I0805 13:48:59.644422       6 log.go:172] (0xc001f8e210) Data frame received for 3
I0805 13:48:59.644436       6 log.go:172] (0xc000fbe320) (3) Data frame handling
I0805 13:48:59.644456       6 log.go:172] (0xc000fbe320) (3) Data frame sent
I0805 13:48:59.644472       6 log.go:172] (0xc001f8e210) Data frame received for 3
I0805 13:48:59.644484       6 log.go:172] (0xc000fbe320) (3) Data frame handling
I0805 13:48:59.645832       6 log.go:172] (0xc001f8e210) Data frame received for 1
I0805 13:48:59.645877       6 log.go:172] (0xc0025f2dc0) (1) Data frame handling
I0805 13:48:59.645897       6 log.go:172] (0xc0025f2dc0) (1) Data frame sent
I0805 13:48:59.645913       6 log.go:172] (0xc001f8e210) (0xc0025f2dc0) Stream removed, broadcasting: 1
I0805 13:48:59.645939       6 log.go:172] (0xc001f8e210) Go away received
I0805 13:48:59.646068       6 log.go:172] (0xc001f8e210) (0xc0025f2dc0) Stream removed, broadcasting: 1
I0805 13:48:59.646091       6 log.go:172] (0xc001f8e210) (0xc000fbe320) Stream removed, broadcasting: 3
I0805 13:48:59.646104       6 log.go:172] (0xc001f8e210) (0xc0025f2e60) Stream removed, broadcasting: 5
Aug  5 13:48:59.646: INFO: Exec stderr: ""
Aug  5 13:48:59.646: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9100 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:48:59.646: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:48:59.684368       6 log.go:172] (0xc00133d080) (0xc0019d6500) Create stream
I0805 13:48:59.684395       6 log.go:172] (0xc00133d080) (0xc0019d6500) Stream added, broadcasting: 1
I0805 13:48:59.687460       6 log.go:172] (0xc00133d080) Reply frame received for 1
I0805 13:48:59.687535       6 log.go:172] (0xc00133d080) (0xc000fbe3c0) Create stream
I0805 13:48:59.687556       6 log.go:172] (0xc00133d080) (0xc000fbe3c0) Stream added, broadcasting: 3
I0805 13:48:59.688829       6 log.go:172] (0xc00133d080) Reply frame received for 3
I0805 13:48:59.688894       6 log.go:172] (0xc00133d080) (0xc000fbe460) Create stream
I0805 13:48:59.688906       6 log.go:172] (0xc00133d080) (0xc000fbe460) Stream added, broadcasting: 5
I0805 13:48:59.690007       6 log.go:172] (0xc00133d080) Reply frame received for 5
I0805 13:48:59.754935       6 log.go:172] (0xc00133d080) Data frame received for 3
I0805 13:48:59.754972       6 log.go:172] (0xc000fbe3c0) (3) Data frame handling
I0805 13:48:59.754992       6 log.go:172] (0xc000fbe3c0) (3) Data frame sent
I0805 13:48:59.755014       6 log.go:172] (0xc00133d080) Data frame received for 3
I0805 13:48:59.755032       6 log.go:172] (0xc000fbe3c0) (3) Data frame handling
I0805 13:48:59.755073       6 log.go:172] (0xc00133d080) Data frame received for 5
I0805 13:48:59.755121       6 log.go:172] (0xc000fbe460) (5) Data frame handling
I0805 13:48:59.756564       6 log.go:172] (0xc00133d080) Data frame received for 1
I0805 13:48:59.756593       6 log.go:172] (0xc0019d6500) (1) Data frame handling
I0805 13:48:59.756604       6 log.go:172] (0xc0019d6500) (1) Data frame sent
I0805 13:48:59.756638       6 log.go:172] (0xc00133d080) (0xc0019d6500) Stream removed, broadcasting: 1
I0805 13:48:59.756668       6 log.go:172] (0xc00133d080) Go away received
I0805 13:48:59.756873       6 log.go:172] (0xc00133d080) (0xc0019d6500) Stream removed, broadcasting: 1
I0805 13:48:59.756894       6 log.go:172] (0xc00133d080) (0xc000fbe3c0) Stream removed, broadcasting: 3
I0805 13:48:59.756903       6 log.go:172] (0xc00133d080) (0xc000fbe460) Stream removed, broadcasting: 5
Aug  5 13:48:59.756: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug  5 13:48:59.756: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9100 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:48:59.757: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:48:59.789593       6 log.go:172] (0xc000d74a50) (0xc00129edc0) Create stream
I0805 13:48:59.789640       6 log.go:172] (0xc000d74a50) (0xc00129edc0) Stream added, broadcasting: 1
I0805 13:48:59.792491       6 log.go:172] (0xc000d74a50) Reply frame received for 1
I0805 13:48:59.792530       6 log.go:172] (0xc000d74a50) (0xc0019d65a0) Create stream
I0805 13:48:59.792542       6 log.go:172] (0xc000d74a50) (0xc0019d65a0) Stream added, broadcasting: 3
I0805 13:48:59.793543       6 log.go:172] (0xc000d74a50) Reply frame received for 3
I0805 13:48:59.793593       6 log.go:172] (0xc000d74a50) (0xc0001c92c0) Create stream
I0805 13:48:59.793614       6 log.go:172] (0xc000d74a50) (0xc0001c92c0) Stream added, broadcasting: 5
I0805 13:48:59.794663       6 log.go:172] (0xc000d74a50) Reply frame received for 5
I0805 13:48:59.863244       6 log.go:172] (0xc000d74a50) Data frame received for 5
I0805 13:48:59.863297       6 log.go:172] (0xc0001c92c0) (5) Data frame handling
I0805 13:48:59.863342       6 log.go:172] (0xc000d74a50) Data frame received for 3
I0805 13:48:59.863422       6 log.go:172] (0xc0019d65a0) (3) Data frame handling
I0805 13:48:59.863468       6 log.go:172] (0xc0019d65a0) (3) Data frame sent
I0805 13:48:59.863568       6 log.go:172] (0xc000d74a50) Data frame received for 3
I0805 13:48:59.863586       6 log.go:172] (0xc0019d65a0) (3) Data frame handling
I0805 13:48:59.865557       6 log.go:172] (0xc000d74a50) Data frame received for 1
I0805 13:48:59.865588       6 log.go:172] (0xc00129edc0) (1) Data frame handling
I0805 13:48:59.865608       6 log.go:172] (0xc00129edc0) (1) Data frame sent
I0805 13:48:59.865631       6 log.go:172] (0xc000d74a50) (0xc00129edc0) Stream removed, broadcasting: 1
I0805 13:48:59.865649       6 log.go:172] (0xc000d74a50) Go away received
I0805 13:48:59.865842       6 log.go:172] (0xc000d74a50) (0xc00129edc0) Stream removed, broadcasting: 1
I0805 13:48:59.865875       6 log.go:172] (0xc000d74a50) (0xc0019d65a0) Stream removed, broadcasting: 3
I0805 13:48:59.865912       6 log.go:172] (0xc000d74a50) (0xc0001c92c0) Stream removed, broadcasting: 5
Aug  5 13:48:59.865: INFO: Exec stderr: ""
Aug  5 13:48:59.865: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9100 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:48:59.865: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:48:59.900112       6 log.go:172] (0xc000d75290) (0xc00129f360) Create stream
I0805 13:48:59.900147       6 log.go:172] (0xc000d75290) (0xc00129f360) Stream added, broadcasting: 1
I0805 13:48:59.903322       6 log.go:172] (0xc000d75290) Reply frame received for 1
I0805 13:48:59.903356       6 log.go:172] (0xc000d75290) (0xc0025f2f00) Create stream
I0805 13:48:59.903368       6 log.go:172] (0xc000d75290) (0xc0025f2f00) Stream added, broadcasting: 3
I0805 13:48:59.904142       6 log.go:172] (0xc000d75290) Reply frame received for 3
I0805 13:48:59.904184       6 log.go:172] (0xc000d75290) (0xc0001c9400) Create stream
I0805 13:48:59.904202       6 log.go:172] (0xc000d75290) (0xc0001c9400) Stream added, broadcasting: 5
I0805 13:48:59.905154       6 log.go:172] (0xc000d75290) Reply frame received for 5
I0805 13:48:59.983955       6 log.go:172] (0xc000d75290) Data frame received for 5
I0805 13:48:59.983994       6 log.go:172] (0xc0001c9400) (5) Data frame handling
I0805 13:48:59.984014       6 log.go:172] (0xc000d75290) Data frame received for 3
I0805 13:48:59.984028       6 log.go:172] (0xc0025f2f00) (3) Data frame handling
I0805 13:48:59.984041       6 log.go:172] (0xc0025f2f00) (3) Data frame sent
I0805 13:48:59.984050       6 log.go:172] (0xc000d75290) Data frame received for 3
I0805 13:48:59.984064       6 log.go:172] (0xc0025f2f00) (3) Data frame handling
I0805 13:48:59.985817       6 log.go:172] (0xc000d75290) Data frame received for 1
I0805 13:48:59.985853       6 log.go:172] (0xc00129f360) (1) Data frame handling
I0805 13:48:59.985872       6 log.go:172] (0xc00129f360) (1) Data frame sent
I0805 13:48:59.985897       6 log.go:172] (0xc000d75290) (0xc00129f360) Stream removed, broadcasting: 1
I0805 13:48:59.985930       6 log.go:172] (0xc000d75290) Go away received
I0805 13:48:59.986120       6 log.go:172] (0xc000d75290) (0xc00129f360) Stream removed, broadcasting: 1
I0805 13:48:59.986138       6 log.go:172] (0xc000d75290) (0xc0025f2f00) Stream removed, broadcasting: 3
I0805 13:48:59.986149       6 log.go:172] (0xc000d75290) (0xc0001c9400) Stream removed, broadcasting: 5
Aug  5 13:48:59.986: INFO: Exec stderr: ""
Aug  5 13:48:59.986: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9100 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:48:59.986: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:49:00.022198       6 log.go:172] (0xc001f8f6b0) (0xc0025f3220) Create stream
I0805 13:49:00.022244       6 log.go:172] (0xc001f8f6b0) (0xc0025f3220) Stream added, broadcasting: 1
I0805 13:49:00.025355       6 log.go:172] (0xc001f8f6b0) Reply frame received for 1
I0805 13:49:00.025442       6 log.go:172] (0xc001f8f6b0) (0xc00129f4a0) Create stream
I0805 13:49:00.025468       6 log.go:172] (0xc001f8f6b0) (0xc00129f4a0) Stream added, broadcasting: 3
I0805 13:49:00.026531       6 log.go:172] (0xc001f8f6b0) Reply frame received for 3
I0805 13:49:00.026570       6 log.go:172] (0xc001f8f6b0) (0xc0025f32c0) Create stream
I0805 13:49:00.026596       6 log.go:172] (0xc001f8f6b0) (0xc0025f32c0) Stream added, broadcasting: 5
I0805 13:49:00.027548       6 log.go:172] (0xc001f8f6b0) Reply frame received for 5
I0805 13:49:00.091086       6 log.go:172] (0xc001f8f6b0) Data frame received for 3
I0805 13:49:00.091120       6 log.go:172] (0xc00129f4a0) (3) Data frame handling
I0805 13:49:00.091133       6 log.go:172] (0xc00129f4a0) (3) Data frame sent
I0805 13:49:00.091144       6 log.go:172] (0xc001f8f6b0) Data frame received for 3
I0805 13:49:00.091168       6 log.go:172] (0xc00129f4a0) (3) Data frame handling
I0805 13:49:00.091194       6 log.go:172] (0xc001f8f6b0) Data frame received for 5
I0805 13:49:00.091204       6 log.go:172] (0xc0025f32c0) (5) Data frame handling
I0805 13:49:00.092435       6 log.go:172] (0xc001f8f6b0) Data frame received for 1
I0805 13:49:00.092475       6 log.go:172] (0xc0025f3220) (1) Data frame handling
I0805 13:49:00.092489       6 log.go:172] (0xc0025f3220) (1) Data frame sent
I0805 13:49:00.092526       6 log.go:172] (0xc001f8f6b0) (0xc0025f3220) Stream removed, broadcasting: 1
I0805 13:49:00.092551       6 log.go:172] (0xc001f8f6b0) Go away received
I0805 13:49:00.092656       6 log.go:172] (0xc001f8f6b0) (0xc0025f3220) Stream removed, broadcasting: 1
I0805 13:49:00.092687       6 log.go:172] (0xc001f8f6b0) (0xc00129f4a0) Stream removed, broadcasting: 3
I0805 13:49:00.092716       6 log.go:172] (0xc001f8f6b0) (0xc0025f32c0) Stream removed, broadcasting: 5
Aug  5 13:49:00.092: INFO: Exec stderr: ""
Aug  5 13:49:00.092: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9100 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug  5 13:49:00.092: INFO: >>> kubeConfig: /root/.kube/config
I0805 13:49:00.126024       6 log.go:172] (0xc002850160) (0xc0001c99a0) Create stream
I0805 13:49:00.126055       6 log.go:172] (0xc002850160) (0xc0001c99a0) Stream added, broadcasting: 1
I0805 13:49:00.128929       6 log.go:172] (0xc002850160) Reply frame received for 1
I0805 13:49:00.128975       6 log.go:172] (0xc002850160) (0xc00129f540) Create stream
I0805 13:49:00.128990       6 log.go:172] (0xc002850160) (0xc00129f540) Stream added, broadcasting: 3
I0805 13:49:00.130235       6 log.go:172] (0xc002850160) Reply frame received for 3
I0805 13:49:00.130281       6 log.go:172] (0xc002850160) (0xc0019d6640) Create stream
I0805 13:49:00.130304       6 log.go:172] (0xc002850160) (0xc0019d6640) Stream added, broadcasting: 5
I0805 13:49:00.131458       6 log.go:172] (0xc002850160) Reply frame received for 5
I0805 13:49:00.195143       6 log.go:172] (0xc002850160) Data frame received for 5
I0805 13:49:00.195188       6 log.go:172] (0xc0019d6640) (5) Data frame handling
I0805 13:49:00.195210       6 log.go:172] (0xc002850160) Data frame received for 3
I0805 13:49:00.195218       6 log.go:172] (0xc00129f540) (3) Data frame handling
I0805 13:49:00.195226       6 log.go:172] (0xc00129f540) (3) Data frame sent
I0805 13:49:00.195234       6 log.go:172] (0xc002850160) Data frame received for 3
I0805 13:49:00.195247       6 log.go:172] (0xc00129f540) (3) Data frame handling
I0805 13:49:00.197083       6 log.go:172] (0xc002850160) Data frame received for 1
I0805 13:49:00.197119       6 log.go:172] (0xc0001c99a0) (1) Data frame handling
I0805 13:49:00.197142       6 log.go:172] (0xc0001c99a0) (1) Data frame sent
I0805 13:49:00.197178       6 log.go:172] (0xc002850160) (0xc0001c99a0) Stream removed, broadcasting: 1
I0805 13:49:00.197218       6 log.go:172] (0xc002850160) Go away received
I0805 13:49:00.197354       6 log.go:172] (0xc002850160) (0xc0001c99a0) Stream removed, broadcasting: 1
I0805 13:49:00.197379       6 log.go:172] (0xc002850160) (0xc00129f540) Stream removed, broadcasting: 3
I0805 13:49:00.197391       6 log.go:172] (0xc002850160) (0xc0019d6640) Stream removed, broadcasting: 5
Aug  5 13:49:00.197: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:49:00.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-9100" for this suite.
Aug  5 13:49:50.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:49:50.327: INFO: namespace e2e-kubelet-etc-hosts-9100 deletion completed in 50.12090666s

• [SLOW TEST:61.363 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
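
Note: the exec traffic above is the e2e framework's ExecWithOptions helper driving a SPDY exec session; the broadcast streams 1, 3 and 5 appear to carry the protocol's error, stdout and stderr channels, which is why each file's contents arrive as data frames on stream 3 while stream 5 stays empty. A minimal sketch of one such call, assuming the v1.15-era framework package k8s.io/kubernetes/test/e2e/framework and its *framework.Framework receiver f; every value is copied from the log above:

    // f is the suite's *framework.Framework (assumption: standard v1.15 layout)
    stdout, stderr, err := f.ExecWithOptions(framework.ExecOptions{
        Command:            []string{"cat", "/etc/hosts"},
        Namespace:          "e2e-kubelet-etc-hosts-9100",
        PodName:            "test-pod",
        ContainerName:      "busybox-1",
        CaptureStdout:      true,
        CaptureStderr:      true,
        PreserveWhitespace: false,
    })

For the kubelet-managed containers stdout is expected to contain the kubelet-written host entries; for busybox-3, which mounts /etc/hosts itself, and for the hostNetwork pod, it is not.
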
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:49:50.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-f9c584f8-ffff-4ef5-b880-8888ce7678cc
STEP: Creating a pod to test consume secrets
Aug  5 13:49:50.425: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-58d374fc-af99-4d5f-bc3d-5cee083bc8a1" in namespace "projected-5221" to be "success or failure"
Aug  5 13:49:50.440: INFO: Pod "pod-projected-secrets-58d374fc-af99-4d5f-bc3d-5cee083bc8a1": Phase="Pending", Reason="", readiness=false. Elapsed: 15.398722ms
Aug  5 13:49:52.470: INFO: Pod "pod-projected-secrets-58d374fc-af99-4d5f-bc3d-5cee083bc8a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045105431s
Aug  5 13:49:54.473: INFO: Pod "pod-projected-secrets-58d374fc-af99-4d5f-bc3d-5cee083bc8a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048236838s
STEP: Saw pod success
Aug  5 13:49:54.473: INFO: Pod "pod-projected-secrets-58d374fc-af99-4d5f-bc3d-5cee083bc8a1" satisfied condition "success or failure"
Aug  5 13:49:54.475: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-58d374fc-af99-4d5f-bc3d-5cee083bc8a1 container projected-secret-volume-test: 
STEP: delete the pod
Aug  5 13:49:54.495: INFO: Waiting for pod pod-projected-secrets-58d374fc-af99-4d5f-bc3d-5cee083bc8a1 to disappear
Aug  5 13:49:54.500: INFO: Pod pod-projected-secrets-58d374fc-af99-4d5f-bc3d-5cee083bc8a1 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:49:54.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5221" for this suite.
Aug  5 13:50:00.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:50:00.601: INFO: namespace projected-5221 deletion completed in 6.096995802s

• [SLOW TEST:10.274 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
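
Note: the defaultMode check above mounts a projected secret volume and reads back the permission bits on the mounted key. A sketch of the volume under test, using k8s.io/api/core/v1 types; the secret name is taken from the log, while the volume name and the 0400 mode are illustrative assumptions:

    mode := int32(0400) // assumed value; the test pins some non-default mode
    vol := corev1.Volume{
        Name: "projected-secret-volume", // assumed volume name
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                DefaultMode: &mode,
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{
                            Name: "projected-secret-test-f9c584f8-ffff-4ef5-b880-8888ce7678cc",
                        },
                    },
                }},
            },
        },
    }
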
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:50:00.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 13:50:00.665: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb3757b5-e871-4f11-a517-445905c24cb4" in namespace "downward-api-597" to be "success or failure"
Aug  5 13:50:00.668: INFO: Pod "downwardapi-volume-eb3757b5-e871-4f11-a517-445905c24cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.127004ms
Aug  5 13:50:02.758: INFO: Pod "downwardapi-volume-eb3757b5-e871-4f11-a517-445905c24cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092952771s
Aug  5 13:50:04.761: INFO: Pod "downwardapi-volume-eb3757b5-e871-4f11-a517-445905c24cb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096629913s
STEP: Saw pod success
Aug  5 13:50:04.761: INFO: Pod "downwardapi-volume-eb3757b5-e871-4f11-a517-445905c24cb4" satisfied condition "success or failure"
Aug  5 13:50:04.763: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-eb3757b5-e871-4f11-a517-445905c24cb4 container client-container: 
STEP: delete the pod
Aug  5 13:50:05.137: INFO: Waiting for pod downwardapi-volume-eb3757b5-e871-4f11-a517-445905c24cb4 to disappear
Aug  5 13:50:05.145: INFO: Pod downwardapi-volume-eb3757b5-e871-4f11-a517-445905c24cb4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:50:05.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-597" for this suite.
Aug  5 13:50:11.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:50:11.229: INFO: namespace downward-api-597 deletion completed in 6.081238601s

• [SLOW TEST:10.628 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
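
Note: the pod created above mounts a downward API volume whose single file resolves to the container's CPU limit. A sketch of that volume, using k8s.io/api/core/v1 types; the container name comes from the log, the volume name and file path are assumptions:

    vol := corev1.Volume{
        Name: "podinfo", // assumed
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "cpu_limit", // assumed path; the test reads this file back
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container",
                        Resource:      "limits.cpu",
                    },
                }},
            },
        },
    }
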
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:50:11.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 13:50:11.267: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ab6cd7c-3cc6-4c8a-a58b-c01d70ed860e" in namespace "projected-7950" to be "success or failure"
Aug  5 13:50:11.302: INFO: Pod "downwardapi-volume-0ab6cd7c-3cc6-4c8a-a58b-c01d70ed860e": Phase="Pending", Reason="", readiness=false. Elapsed: 34.417236ms
Aug  5 13:50:13.503: INFO: Pod "downwardapi-volume-0ab6cd7c-3cc6-4c8a-a58b-c01d70ed860e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235344668s
Aug  5 13:50:15.506: INFO: Pod "downwardapi-volume-0ab6cd7c-3cc6-4c8a-a58b-c01d70ed860e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.238348573s
Aug  5 13:50:17.547: INFO: Pod "downwardapi-volume-0ab6cd7c-3cc6-4c8a-a58b-c01d70ed860e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.279577895s
STEP: Saw pod success
Aug  5 13:50:17.547: INFO: Pod "downwardapi-volume-0ab6cd7c-3cc6-4c8a-a58b-c01d70ed860e" satisfied condition "success or failure"
Aug  5 13:50:17.549: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0ab6cd7c-3cc6-4c8a-a58b-c01d70ed860e container client-container: 
STEP: delete the pod
Aug  5 13:50:17.789: INFO: Waiting for pod downwardapi-volume-0ab6cd7c-3cc6-4c8a-a58b-c01d70ed860e to disappear
Aug  5 13:50:17.798: INFO: Pod downwardapi-volume-0ab6cd7c-3cc6-4c8a-a58b-c01d70ed860e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:50:17.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7950" for this suite.
Aug  5 13:50:23.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:50:23.881: INFO: namespace projected-7950 deletion completed in 6.064085424s

• [SLOW TEST:12.652 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
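
Note: the projected variant of the same check wraps identical downward API items in a ProjectedVolumeSource rather than mounting them directly; a sketch, reusing the item from the previous note:

    vol := corev1.Volume{
        Name: "podinfo", // assumed
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_limit", // assumed
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.cpu",
                            },
                        }},
                    },
                }},
            },
        },
    }
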
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:50:23.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Aug  5 13:50:23.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug  5 13:50:24.176: INFO: stderr: ""
Aug  5 13:50:24.176: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:50:24.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2301" for this suite.
Aug  5 13:50:30.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:50:31.396: INFO: namespace kubectl-2301 deletion completed in 7.216103576s

• [SLOW TEST:7.514 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
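
Note: the api-versions check shells out to kubectl, but the same assertion can be made with client-go's discovery client; a sketch, assuming a client-go of the same era:

    // imports: k8s.io/client-go/discovery, k8s.io/client-go/tools/clientcmd
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    dc, err := discovery.NewDiscoveryClientForConfig(config)
    if err != nil {
        panic(err)
    }
    groups, err := dc.ServerGroups()
    if err != nil {
        panic(err)
    }
    found := false
    for _, g := range groups.Groups {
        for _, v := range g.Versions {
            if v.GroupVersion == "v1" { // the legacy core group, last entry in the stdout above
                found = true
            }
        }
    }
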
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:50:31.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug  5 13:50:31.821: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Aug  5 13:50:33.887: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug  5 13:50:37.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232233, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232233, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232233, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232233, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  5 13:50:39.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232233, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232233, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232233, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232233, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  5 13:50:41.227: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232233, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232233, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232233, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232233, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  5 13:50:43.815: INFO: Waited 589.543147ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:50:44.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3816" for this suite.
Aug  5 13:50:50.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:50:50.483: INFO: namespace aggregator-3816 deletion completed in 6.230675932s

• [SLOW TEST:19.087 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
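
Note: "Registering the sample API server" means creating an APIService object that points the aggregator at a Service fronting the sample-apiserver deployment. A sketch, assuming k8s.io/kube-aggregator/pkg/apis/apiregistration/v1 types; the group, version and service name follow the upstream sample-apiserver convention and are assumptions, not values from this log:

    apiService := &apiregistrationv1.APIService{
        ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"}, // assumed name
        Spec: apiregistrationv1.APIServiceSpec{
            Group:   "wardle.k8s.io", // assumed group
            Version: "v1alpha1",      // assumed version
            Service: &apiregistrationv1.ServiceReference{
                Namespace: "aggregator-3816",
                Name:      "sample-api", // assumed service name
            },
            CABundle:             caBundle, // assumed: CA trusted to sign the sample server's serving cert
            GroupPriorityMinimum: 2000,
            VersionPriority:      200,
        },
    }

Once the backing deployment reports Available (the status polling above), requests for the aggregated group are proxied through the kube-apiserver to the sample server.
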
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:50:50.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Aug  5 13:50:50.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2611'
Aug  5 13:50:53.410: INFO: stderr: ""
Aug  5 13:50:53.410: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Aug  5 13:50:54.414: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 13:50:54.415: INFO: Found 0 / 1
Aug  5 13:50:55.925: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 13:50:55.925: INFO: Found 0 / 1
Aug  5 13:50:56.416: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 13:50:56.416: INFO: Found 0 / 1
Aug  5 13:50:57.414: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 13:50:57.414: INFO: Found 0 / 1
Aug  5 13:50:58.414: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 13:50:58.414: INFO: Found 1 / 1
Aug  5 13:50:58.414: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug  5 13:50:58.416: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 13:50:58.416: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Aug  5 13:50:58.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76nzb redis-master --namespace=kubectl-2611'
Aug  5 13:50:58.523: INFO: stderr: ""
Aug  5 13:50:58.523: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Aug 13:50:57.582 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Aug 13:50:57.582 # Server started, Redis version 3.2.12\n1:M 05 Aug 13:50:57.582 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Aug 13:50:57.583 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Aug  5 13:50:58.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76nzb redis-master --namespace=kubectl-2611 --tail=1'
Aug  5 13:50:58.631: INFO: stderr: ""
Aug  5 13:50:58.631: INFO: stdout: "1:M 05 Aug 13:50:57.583 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Aug  5 13:50:58.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76nzb redis-master --namespace=kubectl-2611 --limit-bytes=1'
Aug  5 13:50:58.730: INFO: stderr: ""
Aug  5 13:50:58.730: INFO: stdout: " "
STEP: exposing timestamps
Aug  5 13:50:58.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76nzb redis-master --namespace=kubectl-2611 --tail=1 --timestamps'
Aug  5 13:50:58.826: INFO: stderr: ""
Aug  5 13:50:58.826: INFO: stdout: "2020-08-05T13:50:57.583085602Z 1:M 05 Aug 13:50:57.583 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Aug  5 13:51:01.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76nzb redis-master --namespace=kubectl-2611 --since=1s'
Aug  5 13:51:01.438: INFO: stderr: ""
Aug  5 13:51:01.438: INFO: stdout: ""
Aug  5 13:51:01.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76nzb redis-master --namespace=kubectl-2611 --since=24h'
Aug  5 13:51:01.553: INFO: stderr: ""
Aug  5 13:51:01.553: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Aug 13:50:57.582 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Aug 13:50:57.582 # Server started, Redis version 3.2.12\n1:M 05 Aug 13:50:57.582 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Aug 13:50:57.583 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Aug  5 13:51:01.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2611'
Aug  5 13:51:01.641: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  5 13:51:01.641: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Aug  5 13:51:01.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2611'
Aug  5 13:51:01.748: INFO: stderr: "No resources found.\n"
Aug  5 13:51:01.748: INFO: stdout: ""
Aug  5 13:51:01.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2611 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug  5 13:51:01.830: INFO: stderr: ""
Aug  5 13:51:01.830: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:51:01.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2611" for this suite.
Aug  5 13:51:23.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:51:23.939: INFO: namespace kubectl-2611 deletion completed in 22.105794313s

• [SLOW TEST:33.455 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
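
Note: each kubectl logs flag exercised above has a direct counterpart in corev1.PodLogOptions when the same queries are issued through the API. The test runs them one at a time; as a sketch of the field mapping only:

    tail, limit, since := int64(1), int64(1), int64(1)
    opts := corev1.PodLogOptions{
        Container:    "redis-master",
        TailLines:    &tail,  // --tail=1
        LimitBytes:   &limit, // --limit-bytes=1
        Timestamps:   true,   // --timestamps
        SinceSeconds: &since, // --since=1s
    }

This mapping is also why --since=1s returns nothing once the server has been quiet for more than a second, while --since=24h replays the entire startup banner.
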
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:51:23.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 13:51:24.005: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2b411851-d655-4b4c-ab18-b26afda188e4" in namespace "projected-4177" to be "success or failure"
Aug  5 13:51:24.009: INFO: Pod "downwardapi-volume-2b411851-d655-4b4c-ab18-b26afda188e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172832ms
Aug  5 13:51:26.012: INFO: Pod "downwardapi-volume-2b411851-d655-4b4c-ab18-b26afda188e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007095576s
Aug  5 13:51:28.015: INFO: Pod "downwardapi-volume-2b411851-d655-4b4c-ab18-b26afda188e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010667987s
Aug  5 13:51:30.020: INFO: Pod "downwardapi-volume-2b411851-d655-4b4c-ab18-b26afda188e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014955119s
Aug  5 13:51:32.151: INFO: Pod "downwardapi-volume-2b411851-d655-4b4c-ab18-b26afda188e4": Phase="Running", Reason="", readiness=true. Elapsed: 8.145935636s
Aug  5 13:51:34.154: INFO: Pod "downwardapi-volume-2b411851-d655-4b4c-ab18-b26afda188e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.14978444s
STEP: Saw pod success
Aug  5 13:51:34.154: INFO: Pod "downwardapi-volume-2b411851-d655-4b4c-ab18-b26afda188e4" satisfied condition "success or failure"
Aug  5 13:51:34.157: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2b411851-d655-4b4c-ab18-b26afda188e4 container client-container: 
STEP: delete the pod
Aug  5 13:51:34.181: INFO: Waiting for pod downwardapi-volume-2b411851-d655-4b4c-ab18-b26afda188e4 to disappear
Aug  5 13:51:34.202: INFO: Pod downwardapi-volume-2b411851-d655-4b4c-ab18-b26afda188e4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:51:34.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4177" for this suite.
Aug  5 13:51:40.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:51:40.274: INFO: namespace projected-4177 deletion completed in 6.068155626s

• [SLOW TEST:16.335 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
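
Note: this variant is the same projected downward API volume with the container's CPU limit left unset, so limits.cpu falls back to the node's allocatable CPU. A Divisor can scale what the file reports; a sketch of the item (resource.MustParse from k8s.io/apimachinery/pkg/api/resource):

    item := corev1.DownwardAPIVolumeFile{
        Path: "cpu_limit", // assumed
        ResourceFieldRef: &corev1.ResourceFieldSelector{
            ContainerName: "client-container",
            Resource:      "limits.cpu",
            Divisor:       resource.MustParse("1m"), // report the value in millicores
        },
    }
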
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:51:40.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 13:51:40.335: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug  5 13:51:45.345: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug  5 13:51:49.350: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug  5 13:51:51.353: INFO: Creating deployment "test-rollover-deployment"
Aug  5 13:51:51.362: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug  5 13:51:53.367: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug  5 13:51:53.375: INFO: Ensure that both replica sets have 1 created replica
Aug  5 13:51:53.378: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug  5 13:51:53.384: INFO: Updating deployment test-rollover-deployment
Aug  5 13:51:53.384: INFO: Wait for deployment "test-rollover-deployment" to be observed by the deployment controller
Aug  5 13:51:55.740: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug  5 13:51:55.744: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug  5 13:51:55.749: INFO: all replica sets need to contain the pod-template-hash label
Aug  5 13:51:55.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232311, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232311, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232313, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232311, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  5 13:51:57.797: INFO: all replica sets need to contain the pod-template-hash label
Aug  5 13:51:57.797: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232311, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232311, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232313, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232311, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  5 13:51:59.756: INFO: all replica sets need to contain the pod-template-hash label
Aug  5 13:51:59.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232311, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232311, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232318, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232311, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  5 13:52:01.756: INFO: all replica sets need to contain the pod-template-hash label
Aug  5 13:52:01.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232311, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232311, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232318, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232311, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
[identical "all replica sets need to contain the pod-template-hash label" polls at 13:52:03, 13:52:05, and 13:52:07, each with an unchanged DeploymentStatus, elided]
Aug  5 13:52:09.755: INFO: 
Aug  5 13:52:09.755: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug  5 13:52:09.762: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-4614,SelfLink:/apis/apps/v1/namespaces/deployment-4614/deployments/test-rollover-deployment,UID:a00361ce-463c-4ba8-9b4b-b2f20e8501ce,ResourceVersion:3100513,Generation:2,CreationTimestamp:2020-08-05 13:51:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-05 13:51:51 +0000 UTC 2020-08-05 13:51:51 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-05 13:52:08 +0000 UTC 2020-08-05 13:51:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug  5 13:52:09.765: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-4614,SelfLink:/apis/apps/v1/namespaces/deployment-4614/replicasets/test-rollover-deployment-854595fc44,UID:cb0affcf-3a7e-45c8-9eee-e2490e40c028,ResourceVersion:3100502,Generation:2,CreationTimestamp:2020-08-05 13:51:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a00361ce-463c-4ba8-9b4b-b2f20e8501ce 0xc003109e37 0xc003109e38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug  5 13:52:09.765: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug  5 13:52:09.765: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-4614,SelfLink:/apis/apps/v1/namespaces/deployment-4614/replicasets/test-rollover-controller,UID:a3584a6b-c785-43d7-966f-5d137d7684ef,ResourceVersion:3100511,Generation:2,CreationTimestamp:2020-08-05 13:51:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a00361ce-463c-4ba8-9b4b-b2f20e8501ce 0xc003109d4f 0xc003109d60}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug  5 13:52:09.765: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-4614,SelfLink:/apis/apps/v1/namespaces/deployment-4614/replicasets/test-rollover-deployment-9b8b997cf,UID:49c3822f-f64f-4b73-93eb-587d9e88b205,ResourceVersion:3100462,Generation:2,CreationTimestamp:2020-08-05 13:51:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment a00361ce-463c-4ba8-9b4b-b2f20e8501ce 0xc003109f00 0xc003109f01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug  5 13:52:09.768: INFO: Pod "test-rollover-deployment-854595fc44-nkqg6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-nkqg6,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-4614,SelfLink:/api/v1/namespaces/deployment-4614/pods/test-rollover-deployment-854595fc44-nkqg6,UID:482aff3b-2ab0-4343-924d-7033e819d916,ResourceVersion:3100480,Generation:0,CreationTimestamp:2020-08-05 13:51:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 cb0affcf-3a7e-45c8-9eee-e2490e40c028 0xc002fc3557 0xc002fc3558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4lxrd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4lxrd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-4lxrd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fc35d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fc35f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 13:51:53 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 13:51:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 13:51:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 13:51:53 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.138,StartTime:2020-08-05 13:51:53 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-05 13:51:57 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://323a01f66968c6e1b5ea3e935243b44d0be3dcfb51b5db9e18444d219c5a9b3b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:52:09.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4614" for this suite.
Aug  5 13:52:17.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:52:17.844: INFO: namespace deployment-4614 deletion completed in 8.072675013s

• [SLOW TEST:37.569 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
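For reference, the Deployment this test drives can be reconstructed from the spec dump at 13:52:09 above; the manifest below is that reconstruction in YAML form (field values are taken verbatim from the dump, while the YAML rendering itself is not test output):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10            # a new pod must stay Ready 10s before it counts as available
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # never dip below the desired replica count during rollout
      maxSurge: 1                # permit one extra pod while the new ReplicaSet comes up
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0

With maxUnavailable 0 and maxSurge 1, the controller first brings up the test-rollover-deployment-854595fc44 pod, waits out minReadySeconds, and only then scales the old ReplicaSets to zero; that waiting period is exactly what the repeated status polls above are observing.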
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:52:17.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-43646dd9-2099-4892-8160-355366167d07
STEP: Creating a pod to test consume configMaps
Aug  5 13:52:17.903: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3bd42ecb-8dc7-4113-b32f-c6801311d294" in namespace "projected-9152" to be "success or failure"
Aug  5 13:52:17.916: INFO: Pod "pod-projected-configmaps-3bd42ecb-8dc7-4113-b32f-c6801311d294": Phase="Pending", Reason="", readiness=false. Elapsed: 12.885781ms
Aug  5 13:52:19.933: INFO: Pod "pod-projected-configmaps-3bd42ecb-8dc7-4113-b32f-c6801311d294": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030111703s
Aug  5 13:52:21.936: INFO: Pod "pod-projected-configmaps-3bd42ecb-8dc7-4113-b32f-c6801311d294": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033184175s
Aug  5 13:52:23.967: INFO: Pod "pod-projected-configmaps-3bd42ecb-8dc7-4113-b32f-c6801311d294": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064094781s
STEP: Saw pod success
Aug  5 13:52:23.967: INFO: Pod "pod-projected-configmaps-3bd42ecb-8dc7-4113-b32f-c6801311d294" satisfied condition "success or failure"
Aug  5 13:52:23.969: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-3bd42ecb-8dc7-4113-b32f-c6801311d294 container projected-configmap-volume-test: 
STEP: delete the pod
Aug  5 13:52:25.559: INFO: Waiting for pod pod-projected-configmaps-3bd42ecb-8dc7-4113-b32f-c6801311d294 to disappear
Aug  5 13:52:25.777: INFO: Pod pod-projected-configmaps-3bd42ecb-8dc7-4113-b32f-c6801311d294 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:52:25.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9152" for this suite.
Aug  5 13:52:31.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:52:31.931: INFO: namespace projected-9152 deletion completed in 6.149000189s

• [SLOW TEST:14.087 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
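A minimal sketch of the pod shape this test creates, using the configMap and container names recorded in the log; the image, command, and the concrete defaultMode value are assumptions for illustration, since the log does not record them:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example     # the real pod name carries a generated UUID
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test    # container name from the log above
    image: busybox:1.29                      # assumed image
    command: ["ls", "-l", "/etc/projected-configmap-volume"]   # shows the applied file mode
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400                      # illustrative value; applied to every projected file
      sources:
      - configMap:
          name: projected-configmap-test-volume-43646dd9-2099-4892-8160-355366167d07

The pod runs to completion and the test inspects its log for the expected mode string, hence the "success or failure" polling pattern above.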
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:52:31.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug  5 13:52:32.017: INFO: Waiting up to 5m0s for pod "pod-8d4671e1-55e5-4aa3-9b1c-ff4d442afad7" in namespace "emptydir-8498" to be "success or failure"
Aug  5 13:52:32.034: INFO: Pod "pod-8d4671e1-55e5-4aa3-9b1c-ff4d442afad7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.305466ms
Aug  5 13:52:34.037: INFO: Pod "pod-8d4671e1-55e5-4aa3-9b1c-ff4d442afad7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019939133s
Aug  5 13:52:37.074: INFO: Pod "pod-8d4671e1-55e5-4aa3-9b1c-ff4d442afad7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.056684454s
Aug  5 13:52:39.077: INFO: Pod "pod-8d4671e1-55e5-4aa3-9b1c-ff4d442afad7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.059470803s
Aug  5 13:52:41.081: INFO: Pod "pod-8d4671e1-55e5-4aa3-9b1c-ff4d442afad7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.06319845s
STEP: Saw pod success
Aug  5 13:52:41.081: INFO: Pod "pod-8d4671e1-55e5-4aa3-9b1c-ff4d442afad7" satisfied condition "success or failure"
Aug  5 13:52:41.083: INFO: Trying to get logs from node iruya-worker2 pod pod-8d4671e1-55e5-4aa3-9b1c-ff4d442afad7 container test-container: 
STEP: delete the pod
Aug  5 13:52:41.156: INFO: Waiting for pod pod-8d4671e1-55e5-4aa3-9b1c-ff4d442afad7 to disappear
Aug  5 13:52:41.187: INFO: Pod pod-8d4671e1-55e5-4aa3-9b1c-ff4d442afad7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:52:41.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8498" for this suite.
Aug  5 13:52:47.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:52:47.288: INFO: namespace emptydir-8498 deletion completed in 6.09863718s

• [SLOW TEST:15.356 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
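The "on tmpfs" variant corresponds to an emptyDir with medium: Memory; a minimal sketch under the assumption of a busybox image (the log names only the container, test-container):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs-example    # the real pod name is a generated UUID
spec:
  restartPolicy: Never
  containers:
  - name: test-container              # container name from the log above
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "stat -c '%a' /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                  # back the volume with tmpfs instead of node disk

The expected mode for an emptyDir mount point is 0777; backing it with Memory changes the filesystem (tmpfs) but not the permissions, which is the property being asserted.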
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:52:47.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes

Aug  5 13:52:56.406: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:52:57.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7207" for this suite.
Aug  5 13:53:19.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:53:19.603: INFO: namespace replicaset-7207 deletion completed in 22.181215181s

• [SLOW TEST:32.315 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
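The Given/When/Then steps above map onto a small amount of API machinery; a sketch of the ReplicaSet involved, with the selector taken from the step text (image assumed):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release      # matches the pre-existing orphan pod, so it gets adopted
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: docker.io/library/nginx:1.14-alpine   # assumed image

Adoption and release both work through ownerReferences: when the selector matches an orphan pod the controller patches itself in as owner, and when the test rewrites the pod's name label so the selector no longer matches, the controller drops the ownerReference and creates a replacement, leaving the original pod released.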
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:53:19.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug  5 13:53:19.647: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:53:32.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4467" for this suite.
Aug  5 13:54:10.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:54:10.793: INFO: namespace init-container-4467 deletion completed in 38.079103053s

• [SLOW TEST:51.190 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
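The log records only "PodSpec: initContainers in spec.initContainers", so the sketch below is a generic reconstruction of the shape under test rather than the actual pod; images and commands are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example
spec:
  restartPolicy: Always        # the RestartAlways case this test exercises
  initContainers:              # run sequentially to completion before the app container starts
  - name: init1
    image: busybox:1.29        # assumed image
    command: ["true"]
  - name: init2
    image: busybox:1.29
    command: ["true"]
  containers:
  - name: run1
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]

The assertion behind "should invoke init containers" is ordering: each init container must exit 0 before the next starts, and the pod only reaches Running once all of them have.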
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:54:10.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 13:54:10.885: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c" in namespace "projected-7152" to be "success or failure"
Aug  5 13:54:10.886: INFO: Pod "downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.217125ms
Aug  5 13:54:12.890: INFO: Pod "downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004620316s
Aug  5 13:54:14.893: INFO: Pod "downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008030636s
Aug  5 13:54:17.845: INFO: Pod "downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.959685808s
Aug  5 13:54:19.848: INFO: Pod "downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.963177564s
Aug  5 13:54:22.204: INFO: Pod "downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.318485982s
Aug  5 13:54:24.207: INFO: Pod "downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.321934075s
Aug  5 13:54:26.211: INFO: Pod "downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c": Phase="Running", Reason="", readiness=true. Elapsed: 15.325460985s
Aug  5 13:54:28.214: INFO: Pod "downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.3291436s
STEP: Saw pod success
Aug  5 13:54:28.214: INFO: Pod "downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c" satisfied condition "success or failure"
Aug  5 13:54:28.217: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c container client-container: 
STEP: delete the pod
Aug  5 13:54:28.238: INFO: Waiting for pod downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c to disappear
Aug  5 13:54:28.243: INFO: Pod downwardapi-volume-0d861fd1-4e68-41fb-9997-d6eb9713bb4c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:54:28.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7152" for this suite.
Aug  5 13:54:34.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:54:34.325: INFO: namespace projected-7152 deletion completed in 6.079619202s

• [SLOW TEST:23.531 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
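The "podname only" case projects a single downward API field into the volume; a sketch using the container name from the log (image and file path assumed):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # the real name carries a generated UUID
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # container name from the log above
    image: busybox:1.29              # assumed image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname            # assumed file name inside the volume
            fieldRef:
              fieldPath: metadata.name   # the only field this variant exposes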
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:54:34.325: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-262f9ef7-2d91-4981-b1ec-4f41fa72f1cf
STEP: Creating a pod to test consume configMaps
Aug  5 13:54:34.425: INFO: Waiting up to 5m0s for pod "pod-configmaps-2c0375a9-a3bd-4b3c-9d96-fe8b688fd94c" in namespace "configmap-5222" to be "success or failure"
Aug  5 13:54:34.447: INFO: Pod "pod-configmaps-2c0375a9-a3bd-4b3c-9d96-fe8b688fd94c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.107944ms
Aug  5 13:54:36.450: INFO: Pod "pod-configmaps-2c0375a9-a3bd-4b3c-9d96-fe8b688fd94c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025062367s
Aug  5 13:54:38.540: INFO: Pod "pod-configmaps-2c0375a9-a3bd-4b3c-9d96-fe8b688fd94c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114923334s
Aug  5 13:54:40.543: INFO: Pod "pod-configmaps-2c0375a9-a3bd-4b3c-9d96-fe8b688fd94c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.118330296s
STEP: Saw pod success
Aug  5 13:54:40.543: INFO: Pod "pod-configmaps-2c0375a9-a3bd-4b3c-9d96-fe8b688fd94c" satisfied condition "success or failure"
Aug  5 13:54:40.545: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-2c0375a9-a3bd-4b3c-9d96-fe8b688fd94c container configmap-volume-test: 
STEP: delete the pod
Aug  5 13:54:40.648: INFO: Waiting for pod pod-configmaps-2c0375a9-a3bd-4b3c-9d96-fe8b688fd94c to disappear
Aug  5 13:54:40.663: INFO: Pod pod-configmaps-2c0375a9-a3bd-4b3c-9d96-fe8b688fd94c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:54:40.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5222" for this suite.
Aug  5 13:54:46.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:54:46.730: INFO: namespace configmap-5222 deletion completed in 6.065040303s

• [SLOW TEST:12.405 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
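Same defaultMode semantics as the projected variant earlier in this run, but through a plain configMap volume; names are from the log, the mode value again assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example       # the real name is a generated UUID
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test      # container name from the log above
    image: busybox:1.29              # assumed image
    command: ["ls", "-l", "/etc/configmap-volume"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/configmap-volume
  volumes:
  - name: cfg
    configMap:
      name: configmap-test-volume-262f9ef7-2d91-4981-b1ec-4f41fa72f1cf
      defaultMode: 0400              # illustrative; applied to each key projected as a file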
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:54:46.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-a3493583-582d-466c-a168-ef259b343ffb
STEP: Creating configMap with name cm-test-opt-upd-ac2d2455-a850-40d3-b335-dab9f63af7a4
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-a3493583-582d-466c-a168-ef259b343ffb
STEP: Updating configmap cm-test-opt-upd-ac2d2455-a850-40d3-b335-dab9f63af7a4
STEP: Creating configMap with name cm-test-opt-create-04f84ab3-b61e-4467-a369-34a5147ceae3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:55:00.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4633" for this suite.
Aug  5 13:55:40.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:55:40.983: INFO: namespace projected-4633 deletion completed in 40.06737996s

• [SLOW TEST:54.252 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
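The three configMaps in the steps above (opt-del deleted, opt-upd updated, opt-create created only after the pod starts) all hang off volumes marked optional: true; a sketch of one of them, with image, command, and container name assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional-example
spec:
  containers:
  - name: createcm-volume-test        # hypothetical container name
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "while true; do cat /etc/cm-create/data-1; sleep 2; done"]
    volumeMounts:
    - name: createcm
      mountPath: /etc/cm-create
  volumes:
  - name: createcm
    configMap:
      name: cm-test-opt-create-04f84ab3-b61e-4467-a369-34a5147ceae3
      optional: true                  # mount succeeds even while this configMap does not exist

Because the kubelet re-syncs configMap volumes periodically, the delete, the update, and the late creation all become visible inside the running pod, which is what the "waiting to observe update in volume" step polls for.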
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:55:40.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug  5 13:56:01.114: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug  5 13:56:01.121: INFO: Pod pod-with-poststart-http-hook still exists
Aug  5 13:56:03.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug  5 13:56:03.127: INFO: Pod pod-with-poststart-http-hook still exists
Aug  5 13:56:05.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug  5 13:56:05.125: INFO: Pod pod-with-poststart-http-hook still exists
Aug  5 13:56:07.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug  5 13:56:07.125: INFO: Pod pod-with-poststart-http-hook still exists
Aug  5 13:56:09.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug  5 13:56:09.125: INFO: Pod pod-with-poststart-http-hook still exists
Aug  5 13:56:11.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug  5 13:56:11.126: INFO: Pod pod-with-poststart-http-hook still exists
Aug  5 13:56:13.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug  5 13:56:13.126: INFO: Pod pod-with-poststart-http-hook still exists
Aug  5 13:56:15.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug  5 13:56:15.126: INFO: Pod pod-with-poststart-http-hook still exists
Aug  5 13:56:17.122: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug  5 13:56:17.126: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:56:17.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1065" for this suite.
Aug  5 13:56:39.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:56:39.218: INFO: namespace container-lifecycle-hook-1065 deletion completed in 22.088203682s

• [SLOW TEST:58.235 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
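A sketch of the hooked pod, reusing the pod name from the log; the image and, in particular, the httpGet target are hypothetical, since the real target is the handler pod created in the BeforeEach step, whose address the log does not show:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: docker.io/library/nginx:1.14-alpine   # assumed image
    lifecycle:
      postStart:
        httpGet:               # the kubelet issues this GET right after the container starts
          path: /echo          # hypothetical path
          port: 8080           # hypothetical port
          host: 10.244.2.200   # hypothetical address of the pre-created hook-handler pod

The "check poststart hook" step then asks the handler whether the GET arrived; if a postStart hook fails, the kubelet kills the container and the pod's restart policy takes over.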
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:56:39.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug  5 13:56:39.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1703'
Aug  5 13:56:39.374: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug  5 13:56:39.374: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Aug  5 13:56:39.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-1703'
Aug  5 13:56:39.480: INFO: stderr: ""
Aug  5 13:56:39.480: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:56:39.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1703" for this suite.
Aug  5 13:56:45.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:56:45.568: INFO: namespace kubectl-1703 deletion completed in 6.084724807s

• [SLOW TEST:6.350 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
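The deprecation warning captured above is worth unpacking: kubectl run --restart=OnFailure with --generator=job/v1 expanded into roughly the Job below, which newer kubectl versions expect you to create explicitly (for example via kubectl create job). The sketch uses the name and image from the log:

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure        # the --restart=OnFailure flag is what selected the Job generator
      containers:
      - name: e2e-test-nginx-job      # assumed container name
        image: docker.io/library/nginx:1.14-alpine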
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:56:45.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-7d3baaed-6023-4681-8699-b24673fbde43
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-7d3baaed-6023-4681-8699-b24673fbde43
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:56:51.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8997" for this suite.
Aug  5 13:57:13.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:57:13.875: INFO: namespace projected-8997 deletion completed in 22.095352414s

• [SLOW TEST:28.306 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:57:13.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug  5 13:57:17.977: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-d8feb52e-fde3-4cb9-93e4-62abd83a02a3,GenerateName:,Namespace:events-2131,SelfLink:/api/v1/namespaces/events-2131/pods/send-events-d8feb52e-fde3-4cb9-93e4-62abd83a02a3,UID:e9235708-f1af-431d-877e-af3384cbdd78,ResourceVersion:3101470,Generation:0,CreationTimestamp:2020-08-05 13:57:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 913734879,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qgbf9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qgbf9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-qgbf9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fc2260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fc2280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 13:57:13 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 13:57:17 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 13:57:17 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 13:57:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.114,StartTime:2020-08-05 13:57:13 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-05 13:57:16 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://24b7a7efb4d542cef5e8ebb8cd6303381f836c44a3a7fdd482a9208d79cba326}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Aug  5 13:57:19.983: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug  5 13:57:21.988: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:57:21.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2131" for this suite.
Aug  5 13:58:08.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:58:08.125: INFO: namespace events-2131 deletion completed in 46.100507207s

• [SLOW TEST:54.249 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:58:08.125: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 13:58:08.251: INFO: Waiting up to 5m0s for pod "downwardapi-volume-92a34f39-e097-4ee8-83cf-bba2b1b8fef0" in namespace "projected-6823" to be "success or failure"
Aug  5 13:58:08.269: INFO: Pod "downwardapi-volume-92a34f39-e097-4ee8-83cf-bba2b1b8fef0": Phase="Pending", Reason="", readiness=false. Elapsed: 18.209006ms
Aug  5 13:58:10.273: INFO: Pod "downwardapi-volume-92a34f39-e097-4ee8-83cf-bba2b1b8fef0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022813879s
Aug  5 13:58:12.278: INFO: Pod "downwardapi-volume-92a34f39-e097-4ee8-83cf-bba2b1b8fef0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026946843s
STEP: Saw pod success
Aug  5 13:58:12.278: INFO: Pod "downwardapi-volume-92a34f39-e097-4ee8-83cf-bba2b1b8fef0" satisfied condition "success or failure"
Aug  5 13:58:12.281: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-92a34f39-e097-4ee8-83cf-bba2b1b8fef0 container client-container: 
STEP: delete the pod
Aug  5 13:58:12.480: INFO: Waiting for pod downwardapi-volume-92a34f39-e097-4ee8-83cf-bba2b1b8fef0 to disappear
Aug  5 13:58:12.530: INFO: Pod downwardapi-volume-92a34f39-e097-4ee8-83cf-bba2b1b8fef0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:58:12.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6823" for this suite.
Aug  5 13:58:18.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:58:18.693: INFO: namespace projected-6823 deletion completed in 6.131382874s

• [SLOW TEST:10.568 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
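Resource fields reach the downward API volume through resourceFieldRef rather than fieldRef; a sketch with the container name from the log (image, request value, and file name assumed):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container            # container name from the log above
    image: busybox:1.29               # assumed image
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                     # illustrative request for the ref below to resolve
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request         # assumed file name
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m             # with this divisor the file reads "250"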
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:58:18.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug  5 13:58:23.363: INFO: Successfully updated pod "labelsupdate51a484f7-4667-44d5-a65c-1336baf971c5"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:58:27.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4226" for this suite.
Aug  5 13:58:49.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:58:49.522: INFO: namespace downward-api-4226 deletion completed in 22.09799351s

• [SLOW TEST:30.829 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
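
The label-update behavior asserted by "Successfully updated pod" above can be observed by hand with a plain downward API volume — a minimal sketch with illustrative names and an assumed busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo           # illustrative name
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels

After `kubectl label pod labelsupdate-demo key=value2 --overwrite`, the kubelet rewrites /etc/podinfo/labels on its next sync, which is the modification the test waits for.
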
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:58:49.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:58:49.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-932" for this suite.
Aug  5 13:58:55.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:58:55.693: INFO: namespace services-932 deletion completed in 6.092901258s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.170 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
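
This test only inspects the built-in `kubernetes` service in the default namespace and checks that it serves the API over HTTPS. The object it looks at is roughly the following — a sketch, since the cluster IP and target port are cluster-specific:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  type: ClusterIP
  clusterIP: 10.96.0.1        # illustrative; assigned per cluster
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443          # typical kube-apiserver port; varies by distribution
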
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:58:55.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 13:58:55.754: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3dffdff7-74b2-468f-bd3a-667351e115f3" in namespace "downward-api-4801" to be "success or failure"
Aug  5 13:58:55.758: INFO: Pod "downwardapi-volume-3dffdff7-74b2-468f-bd3a-667351e115f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.971816ms
Aug  5 13:58:57.762: INFO: Pod "downwardapi-volume-3dffdff7-74b2-468f-bd3a-667351e115f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007917381s
Aug  5 13:58:59.766: INFO: Pod "downwardapi-volume-3dffdff7-74b2-468f-bd3a-667351e115f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01196979s
STEP: Saw pod success
Aug  5 13:58:59.766: INFO: Pod "downwardapi-volume-3dffdff7-74b2-468f-bd3a-667351e115f3" satisfied condition "success or failure"
Aug  5 13:58:59.768: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3dffdff7-74b2-468f-bd3a-667351e115f3 container client-container: 
STEP: delete the pod
Aug  5 13:58:59.783: INFO: Waiting for pod downwardapi-volume-3dffdff7-74b2-468f-bd3a-667351e115f3 to disappear
Aug  5 13:58:59.805: INFO: Pod downwardapi-volume-3dffdff7-74b2-468f-bd3a-667351e115f3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:58:59.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4801" for this suite.
Aug  5 13:59:05.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:59:05.902: INFO: namespace downward-api-4801 deletion completed in 6.094644886s

• [SLOW TEST:10.209 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
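
The per-item mode knob this test exercises sits on the downward API volume item — a minimal sketch, assuming a busybox image and illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-item-mode-demo  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "stat -L -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        mode: 0400                  # the per-item file mode under test
        fieldRef:
          fieldPath: metadata.name

stat (following the symlink the kubelet creates) should report 400, matching the requested mode.
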
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:59:05.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Aug  5 13:59:05.959: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug  5 13:59:05.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5748'
Aug  5 13:59:06.313: INFO: stderr: ""
Aug  5 13:59:06.313: INFO: stdout: "service/redis-slave created\n"
Aug  5 13:59:06.313: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug  5 13:59:06.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5748'
Aug  5 13:59:06.586: INFO: stderr: ""
Aug  5 13:59:06.586: INFO: stdout: "service/redis-master created\n"
Aug  5 13:59:06.586: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug  5 13:59:06.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5748'
Aug  5 13:59:06.873: INFO: stderr: ""
Aug  5 13:59:06.873: INFO: stdout: "service/frontend created\n"
Aug  5 13:59:06.873: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug  5 13:59:06.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5748'
Aug  5 13:59:07.129: INFO: stderr: ""
Aug  5 13:59:07.130: INFO: stdout: "deployment.apps/frontend created\n"
Aug  5 13:59:07.130: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug  5 13:59:07.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5748'
Aug  5 13:59:07.401: INFO: stderr: ""
Aug  5 13:59:07.401: INFO: stdout: "deployment.apps/redis-master created\n"
Aug  5 13:59:07.402: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Aug  5 13:59:07.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5748'
Aug  5 13:59:07.761: INFO: stderr: ""
Aug  5 13:59:07.761: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Aug  5 13:59:07.761: INFO: Waiting for all frontend pods to be Running.
Aug  5 13:59:17.812: INFO: Waiting for frontend to serve content.
Aug  5 13:59:17.828: INFO: Trying to add a new entry to the guestbook.
Aug  5 13:59:17.845: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug  5 13:59:17.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5748'
Aug  5 13:59:17.989: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  5 13:59:17.989: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug  5 13:59:17.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5748'
Aug  5 13:59:18.127: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  5 13:59:18.127: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug  5 13:59:18.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5748'
Aug  5 13:59:18.274: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  5 13:59:18.274: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug  5 13:59:18.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5748'
Aug  5 13:59:18.394: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  5 13:59:18.394: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug  5 13:59:18.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5748'
Aug  5 13:59:18.495: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  5 13:59:18.495: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug  5 13:59:18.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5748'
Aug  5 13:59:18.610: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  5 13:59:18.610: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 13:59:18.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5748" for this suite.
Aug  5 13:59:56.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 13:59:56.744: INFO: namespace kubectl-5748 deletion completed in 38.127244715s

• [SLOW TEST:50.841 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 13:59:56.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-152db80d-fa79-4656-afa2-4aed837e3c6e
STEP: Creating a pod to test consume configMaps
Aug  5 13:59:56.846: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8ece2595-d427-4e03-a9d5-8f9af448015a" in namespace "projected-8774" to be "success or failure"
Aug  5 13:59:56.862: INFO: Pod "pod-projected-configmaps-8ece2595-d427-4e03-a9d5-8f9af448015a": Phase="Pending", Reason="", readiness=false. Elapsed: 15.064798ms
Aug  5 13:59:58.866: INFO: Pod "pod-projected-configmaps-8ece2595-d427-4e03-a9d5-8f9af448015a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019464166s
Aug  5 14:00:00.870: INFO: Pod "pod-projected-configmaps-8ece2595-d427-4e03-a9d5-8f9af448015a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023351165s
STEP: Saw pod success
Aug  5 14:00:00.870: INFO: Pod "pod-projected-configmaps-8ece2595-d427-4e03-a9d5-8f9af448015a" satisfied condition "success or failure"
Aug  5 14:00:00.873: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-8ece2595-d427-4e03-a9d5-8f9af448015a container projected-configmap-volume-test: 
STEP: delete the pod
Aug  5 14:00:00.905: INFO: Waiting for pod pod-projected-configmaps-8ece2595-d427-4e03-a9d5-8f9af448015a to disappear
Aug  5 14:00:00.933: INFO: Pod pod-projected-configmaps-8ece2595-d427-4e03-a9d5-8f9af448015a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:00:00.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8774" for this suite.
Aug  5 14:00:06.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:00:07.025: INFO: namespace projected-8774 deletion completed in 6.088919991s

• [SLOW TEST:10.281 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
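
Consuming one ConfigMap through two volumes in the same pod, as this test does, looks roughly like this — a sketch with illustrative names and an assumed busybox image:

apiVersion: v1
kind: ConfigMap
metadata:
  name: shared-config               # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: two-volume-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                  # assumed image
    command: ["sh", "-c", "cat /etc/cfg-a/data-1 /etc/cfg-b/data-1"]
    volumeMounts:
    - name: cfg-a
      mountPath: /etc/cfg-a
    - name: cfg-b
      mountPath: /etc/cfg-b
  volumes:
  - name: cfg-a
    projected:
      sources:
      - configMap:
          name: shared-config
  - name: cfg-b
    projected:
      sources:
      - configMap:
          name: shared-config

Both mounts surface the same data-1 key, which is what the test verifies from the container log.
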
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:00:07.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b578e7a7-3aae-45ee-ab71-7c7336bc87d5
STEP: Creating a pod to test consume secrets
Aug  5 14:00:07.086: INFO: Waiting up to 5m0s for pod "pod-secrets-49952792-fd91-437d-9900-62874ee3442f" in namespace "secrets-8781" to be "success or failure"
Aug  5 14:00:07.095: INFO: Pod "pod-secrets-49952792-fd91-437d-9900-62874ee3442f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.970271ms
Aug  5 14:00:09.099: INFO: Pod "pod-secrets-49952792-fd91-437d-9900-62874ee3442f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013254079s
Aug  5 14:00:11.104: INFO: Pod "pod-secrets-49952792-fd91-437d-9900-62874ee3442f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017628629s
STEP: Saw pod success
Aug  5 14:00:11.104: INFO: Pod "pod-secrets-49952792-fd91-437d-9900-62874ee3442f" satisfied condition "success or failure"
Aug  5 14:00:11.107: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-49952792-fd91-437d-9900-62874ee3442f container secret-volume-test: 
STEP: delete the pod
Aug  5 14:00:11.142: INFO: Waiting for pod pod-secrets-49952792-fd91-437d-9900-62874ee3442f to disappear
Aug  5 14:00:11.163: INFO: Pod pod-secrets-49952792-fd91-437d-9900-62874ee3442f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:00:11.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8781" for this suite.
Aug  5 14:00:17.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:00:17.277: INFO: namespace secrets-8781 deletion completed in 6.110094457s

• [SLOW TEST:10.251 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
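
The defaultMode knob applies one file mode to every key projected from the secret — a minimal sketch, with illustrative names and an assumed busybox image:

apiVersion: v1
kind: Secret
metadata:
  name: mode-demo-secret            # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                  # assumed image
    command: ["sh", "-c", "stat -L -c %a /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mode-demo-secret
      defaultMode: 0400             # the knob under test
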
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:00:17.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2633.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2633.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2633.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2633.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2633.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2633.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug  5 14:00:23.403: INFO: DNS probes using dns-2633/dns-test-e7bb7ea7-73e9-4f88-8b94-6e7b44833939 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:00:23.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2633" for this suite.
Aug  5 14:00:29.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:00:29.620: INFO: namespace dns-2633 deletion completed in 6.138586073s

• [SLOW TEST:12.341 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
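
The getent probes above resolve the pod's own hostname through the /etc/hosts entries the kubelet writes; the FQDN form additionally needs hostname and subdomain set under a matching headless service. A minimal sketch, reusing the names visible in the probe commands (the busybox image is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service            # must match the pod's subdomain
spec:
  clusterIP: None                   # headless
  selector:
    app: dns-demo
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-demo-pod                # illustrative name
  labels:
    app: dns-demo
spec:
  hostname: dns-querier-1
  subdomain: dns-test-service
  containers:
  - name: querier
    image: busybox                  # assumed image
    command: ["sh", "-c", "getent hosts dns-querier-1 && sleep 3600"]
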
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:00:29.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-b05f752f-ae04-4dc9-945c-30b9740a7153
STEP: Creating a pod to test consume secrets
Aug  5 14:00:29.696: INFO: Waiting up to 5m0s for pod "pod-secrets-11eacea1-e8f2-4d6d-b836-a233e47f7536" in namespace "secrets-4966" to be "success or failure"
Aug  5 14:00:29.701: INFO: Pod "pod-secrets-11eacea1-e8f2-4d6d-b836-a233e47f7536": Phase="Pending", Reason="", readiness=false. Elapsed: 5.26892ms
Aug  5 14:00:31.720: INFO: Pod "pod-secrets-11eacea1-e8f2-4d6d-b836-a233e47f7536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023783498s
Aug  5 14:00:33.723: INFO: Pod "pod-secrets-11eacea1-e8f2-4d6d-b836-a233e47f7536": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026901422s
STEP: Saw pod success
Aug  5 14:00:33.723: INFO: Pod "pod-secrets-11eacea1-e8f2-4d6d-b836-a233e47f7536" satisfied condition "success or failure"
Aug  5 14:00:33.726: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-11eacea1-e8f2-4d6d-b836-a233e47f7536 container secret-volume-test: 
STEP: delete the pod
Aug  5 14:00:33.754: INFO: Waiting for pod pod-secrets-11eacea1-e8f2-4d6d-b836-a233e47f7536 to disappear
Aug  5 14:00:33.767: INFO: Pod pod-secrets-11eacea1-e8f2-4d6d-b836-a233e47f7536 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:00:33.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4966" for this suite.
Aug  5 14:00:39.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:00:39.876: INFO: namespace secrets-4966 deletion completed in 6.105170817s

• [SLOW TEST:10.256 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
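
Relative to the defaultMode case earlier, this test adds a key-to-path mapping plus a per-item mode — again a sketch, reusing the hypothetical secret from the earlier sketch:

apiVersion: v1
kind: Pod
metadata:
  name: secret-item-mode-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                  # assumed image
    command: ["sh", "-c", "stat -L -c %a /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: mode-demo-secret  # hypothetical secret defined in the earlier sketch
      items:
      - key: data-1
        path: new-path-data-1       # the mapping under test
        mode: 0400                  # the per-item mode under test
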
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:00:39.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug  5 14:00:44.488: INFO: Successfully updated pod "annotationupdate06107afd-4ee4-4943-9a9e-aaa154742c0a"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:00:48.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8870" for this suite.
Aug  5 14:01:10.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:01:10.618: INFO: namespace projected-8870 deletion completed in 22.086958402s

• [SLOW TEST:30.742 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
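
The annotation variant differs from the labels sketch earlier only in the field path and in going through a projected source — once more a sketch with illustrative names and an assumed busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo       # illustrative name
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox                  # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations

`kubectl annotate pod annotationupdate-demo build=two --overwrite` then shows up in the mounted file, which is the update the test waits for.
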
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:01:10.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug  5 14:01:10.772: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8771,SelfLink:/api/v1/namespaces/watch-8771/configmaps/e2e-watch-test-label-changed,UID:e9ca6c6a-99cd-4b76-adcd-2e6b9b2316c7,ResourceVersion:3102347,Generation:0,CreationTimestamp:2020-08-05 14:01:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug  5 14:01:10.772: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8771,SelfLink:/api/v1/namespaces/watch-8771/configmaps/e2e-watch-test-label-changed,UID:e9ca6c6a-99cd-4b76-adcd-2e6b9b2316c7,ResourceVersion:3102348,Generation:0,CreationTimestamp:2020-08-05 14:01:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug  5 14:01:10.772: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8771,SelfLink:/api/v1/namespaces/watch-8771/configmaps/e2e-watch-test-label-changed,UID:e9ca6c6a-99cd-4b76-adcd-2e6b9b2316c7,ResourceVersion:3102349,Generation:0,CreationTimestamp:2020-08-05 14:01:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug  5 14:01:20.917: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8771,SelfLink:/api/v1/namespaces/watch-8771/configmaps/e2e-watch-test-label-changed,UID:e9ca6c6a-99cd-4b76-adcd-2e6b9b2316c7,ResourceVersion:3102371,Generation:0,CreationTimestamp:2020-08-05 14:01:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug  5 14:01:20.917: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8771,SelfLink:/api/v1/namespaces/watch-8771/configmaps/e2e-watch-test-label-changed,UID:e9ca6c6a-99cd-4b76-adcd-2e6b9b2316c7,ResourceVersion:3102374,Generation:0,CreationTimestamp:2020-08-05 14:01:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug  5 14:01:20.917: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8771,SelfLink:/api/v1/namespaces/watch-8771/configmaps/e2e-watch-test-label-changed,UID:e9ca6c6a-99cd-4b76-adcd-2e6b9b2316c7,ResourceVersion:3102375,Generation:0,CreationTimestamp:2020-08-05 14:01:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:01:20.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8771" for this suite.
Aug  5 14:01:26.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:01:27.037: INFO: namespace watch-8771 deletion completed in 6.112656721s

• [SLOW TEST:16.419 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
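
The same event sequence can be reproduced by hand: open a watch on the label selector the test uses, then flip the label. The object below is taken from the log; the kubectl invocation is included as a comment:

# kubectl get configmaps --watch -l watch-this-configmap=label-changed-and-restored
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"

Changing the label to any non-matching value delivers a DELETED event to that watch even though the object still exists, and restoring it delivers a fresh ADDED event — exactly the sequence logged above.
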
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:01:27.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug  5 14:01:27.123: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:01:36.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7321" for this suite.
Aug  5 14:01:42.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:01:42.482: INFO: namespace pods-7321 deletion completed in 6.099102533s

• [SLOW TEST:15.444 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
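
A minimal pod plus the graceful delete this test performs — a sketch; the name is illustrative and the nginx image is assumed only because the suite uses it elsewhere:

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-demo      # illustrative name
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
# observe creation and deletion events while deleting gracefully:
# kubectl get pods --watch
# kubectl delete pod pod-submit-remove-demo --grace-period=30
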
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:01:42.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-6df31cd2-ded2-4779-95ee-b6bbe7d46f32
STEP: Creating configMap with name cm-test-opt-upd-1e6c3c6d-ec4c-458a-b392-a00c97fe70b7
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6df31cd2-ded2-4779-95ee-b6bbe7d46f32
STEP: Updating configmap cm-test-opt-upd-1e6c3c6d-ec4c-458a-b392-a00c97fe70b7
STEP: Creating configMap with name cm-test-opt-create-44af293c-fc92-4962-83e6-96aa67a306fa
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:01:50.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9268" for this suite.
Aug  5 14:02:14.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:02:14.755: INFO: namespace configmap-9268 deletion completed in 24.085724642s

• [SLOW TEST:32.273 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
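
The optional flag this test exercises lets a pod reference a ConfigMap that may not exist yet — a minimal sketch with illustrative names and an assumed busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: optional-configmap-demo     # illustrative name
spec:
  containers:
  - name: createcm-volume-test
    image: busybox                  # assumed image
    command: ["sh", "-c", "while true; do ls /etc/cm-volume; sleep 5; done"]
    volumeMounts:
    - name: createcm-volume
      mountPath: /etc/cm-volume
  volumes:
  - name: createcm-volume
    configMap:
      name: cm-test-opt-create      # may be absent at pod start
      optional: true                # pod starts anyway; kubelet populates the volume once the ConfigMap appears
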
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:02:14.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug  5 14:02:14.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3930'
Aug  5 14:02:17.637: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug  5 14:02:17.637: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Aug  5 14:02:19.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3930'
Aug  5 14:02:19.836: INFO: stderr: ""
Aug  5 14:02:19.836: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:02:19.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3930" for this suite.
Aug  5 14:02:41.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:02:41.946: INFO: namespace kubectl-3930 deletion completed in 22.105803214s

• [SLOW TEST:27.191 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
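
The deprecated generator logged above (--generator=deployment/apps.v1) expands to roughly the following Deployment; writing it out explicitly is what the deprecation warning recommends. A sketch — the run label is the convention kubectl applies, not something shown in this log:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
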
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:02:41.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 14:02:42.010: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug  5 14:02:42.027: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug  5 14:02:47.032: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug  5 14:02:47.032: INFO: Creating deployment "test-rolling-update-deployment"
Aug  5 14:02:47.037: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug  5 14:02:47.049: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug  5 14:02:49.065: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug  5 14:02:49.066: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232967, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232967, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232967, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732232967, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  5 14:02:51.211: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug  5 14:02:51.219: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-5114,SelfLink:/apis/apps/v1/namespaces/deployment-5114/deployments/test-rolling-update-deployment,UID:c3da6055-eb05-46f0-9388-60d17af4ccd8,ResourceVersion:3102704,Generation:1,CreationTimestamp:2020-08-05 14:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-05 14:02:47 +0000 UTC 2020-08-05 14:02:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-05 14:02:50 +0000 UTC 2020-08-05 14:02:47 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug  5 14:02:51.222: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-5114,SelfLink:/apis/apps/v1/namespaces/deployment-5114/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:72a685a9-cb67-4c77-bfb6-c8bb0757f8b4,ResourceVersion:3102693,Generation:1,CreationTimestamp:2020-08-05 14:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c3da6055-eb05-46f0-9388-60d17af4ccd8 0xc00210d5f7 0xc00210d5f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug  5 14:02:51.222: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug  5 14:02:51.222: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-5114,SelfLink:/apis/apps/v1/namespaces/deployment-5114/replicasets/test-rolling-update-controller,UID:4d6570d5-3f82-4108-a311-6b120b6a4f00,ResourceVersion:3102703,Generation:2,CreationTimestamp:2020-08-05 14:02:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment c3da6055-eb05-46f0-9388-60d17af4ccd8 0xc00210d317 0xc00210d318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug  5 14:02:51.225: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-n9vf7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-n9vf7,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-5114,SelfLink:/api/v1/namespaces/deployment-5114/pods/test-rolling-update-deployment-79f6b9d75c-n9vf7,UID:83b32ee7-27f0-4c71-a3c9-e2537173459b,ResourceVersion:3102692,Generation:0,CreationTimestamp:2020-08-05 14:02:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 72a685a9-cb67-4c77-bfb6-c8bb0757f8b4 0xc0031c86f7 0xc0031c86f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-h7n54 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-h7n54,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-h7n54 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031c8770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031c8790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:02:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:02:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:02:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:02:47 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.124,StartTime:2020-08-05 14:02:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-05 14:02:49 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://8718d19a252097ff2dc314dde0f966a367ef40c988ba0aa5d4ecc38e4435ee32}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:02:51.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5114" for this suite.
Aug  5 14:02:57.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:02:57.377: INFO: namespace deployment-5114 deletion completed in 6.149677581s

• [SLOW TEST:15.431 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
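
Note: the run above is the stock RollingUpdate path. The ReplicaSet dump shows the superseded controller (test-rolling-update-controller) scaled to Replicas:*0 while the pod on the new ReplicaSet reports available, which is exactly what "delete old pods and create new ones" asserts. For orientation, a minimal sketch of the kind of Deployment involved, in Go with client-go (pre-context Create signature to match this run's vintage; the "default" namespace and everything else not quoted from the log are assumptions):

package main

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func must(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    must(err)
    cs, err := kubernetes.NewForConfig(cfg)
    must(err)

    labels := map[string]string{"name": "sample-pod"}
    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(1),
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            // RollingUpdate replaces pods incrementally; the
            // desired-replicas/max-replicas annotations in the dump are
            // this strategy's surge bookkeeping.
            Strategy: appsv1.DeploymentStrategy{
                Type: appsv1.RollingUpdateDeploymentStrategyType,
            },
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "nginx",
                        Image: "docker.io/library/nginx:1.14-alpine",
                    }},
                },
            },
        },
    }
    _, err = cs.AppsV1().Deployments("default").Create(d)
    must(err)
}

Changing d.Spec.Template (for instance, swapping the image) and calling Update on the same interface is what starts the rollout the framework then polls for.

------------------------------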
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:02:57.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-7a50c0fa-89b8-4a98-a14c-2edaf96e33b6
STEP: Creating a pod to test consume configMaps
Aug  5 14:02:57.527: INFO: Waiting up to 5m0s for pod "pod-configmaps-3deebbae-32f2-48e3-969e-42db91d84b3c" in namespace "configmap-7945" to be "success or failure"
Aug  5 14:02:57.547: INFO: Pod "pod-configmaps-3deebbae-32f2-48e3-969e-42db91d84b3c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.071462ms
Aug  5 14:02:59.550: INFO: Pod "pod-configmaps-3deebbae-32f2-48e3-969e-42db91d84b3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023618769s
Aug  5 14:03:01.554: INFO: Pod "pod-configmaps-3deebbae-32f2-48e3-969e-42db91d84b3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027690107s
STEP: Saw pod success
Aug  5 14:03:01.554: INFO: Pod "pod-configmaps-3deebbae-32f2-48e3-969e-42db91d84b3c" satisfied condition "success or failure"
Aug  5 14:03:01.557: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-3deebbae-32f2-48e3-969e-42db91d84b3c container configmap-volume-test: 
STEP: delete the pod
Aug  5 14:03:01.579: INFO: Waiting for pod pod-configmaps-3deebbae-32f2-48e3-969e-42db91d84b3c to disappear
Aug  5 14:03:01.596: INFO: Pod pod-configmaps-3deebbae-32f2-48e3-969e-42db91d84b3c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:03:01.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7945" for this suite.
Aug  5 14:03:07.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:03:07.750: INFO: namespace configmap-7945 deletion completed in 6.112854663s

• [SLOW TEST:10.372 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
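
Note: the ConfigMap volume tests in this log all follow the shape seen here: create a ConfigMap, run a short-lived pod whose container prints a projected key, and treat the pod reaching phase Succeeded as the "success or failure" condition. The non-root variant additionally pins a pod-level RunAsUser. A sketch of both objects (hypothetical names, a busybox image, and UID 1000 stand in for details the log does not record):

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func must(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    must(err)
    cs, err := kubernetes.NewForConfig(cfg)
    must(err)

    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    _, err = cs.CoreV1().ConfigMaps("default").Create(cm)
    must(err)

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            // Non-root: every container in the pod runs as this UID.
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "configmap-volume-test",
                Image:   "busybox",
                Command: []string{"cat", "/etc/configmap-volume/data-1"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "configmap-volume",
                    MountPath: "/etc/configmap-volume",
                }},
            }},
        },
    }
    _, err = cs.CoreV1().Pods("default").Create(pod)
    must(err)
}

------------------------------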
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:03:07.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-6xz2
STEP: Creating a pod to test atomic-volume-subpath
Aug  5 14:03:07.856: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6xz2" in namespace "subpath-6035" to be "success or failure"
Aug  5 14:03:07.867: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.402817ms
Aug  5 14:03:09.871: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014272454s
Aug  5 14:03:11.875: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Running", Reason="", readiness=true. Elapsed: 4.018499014s
Aug  5 14:03:13.879: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Running", Reason="", readiness=true. Elapsed: 6.022956406s
Aug  5 14:03:15.884: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Running", Reason="", readiness=true. Elapsed: 8.027167076s
Aug  5 14:03:17.888: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Running", Reason="", readiness=true. Elapsed: 10.031502755s
Aug  5 14:03:19.892: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Running", Reason="", readiness=true. Elapsed: 12.035742567s
Aug  5 14:03:21.896: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Running", Reason="", readiness=true. Elapsed: 14.040002649s
Aug  5 14:03:23.901: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Running", Reason="", readiness=true. Elapsed: 16.044318776s
Aug  5 14:03:25.905: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Running", Reason="", readiness=true. Elapsed: 18.048770686s
Aug  5 14:03:27.909: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Running", Reason="", readiness=true. Elapsed: 20.052788207s
Aug  5 14:03:29.914: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Running", Reason="", readiness=true. Elapsed: 22.057438949s
Aug  5 14:03:31.918: INFO: Pod "pod-subpath-test-configmap-6xz2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.061761592s
STEP: Saw pod success
Aug  5 14:03:31.918: INFO: Pod "pod-subpath-test-configmap-6xz2" satisfied condition "success or failure"
Aug  5 14:03:31.921: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-6xz2 container test-container-subpath-configmap-6xz2: 
STEP: delete the pod
Aug  5 14:03:31.964: INFO: Waiting for pod pod-subpath-test-configmap-6xz2 to disappear
Aug  5 14:03:31.973: INFO: Pod pod-subpath-test-configmap-6xz2 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-6xz2
Aug  5 14:03:31.973: INFO: Deleting pod "pod-subpath-test-configmap-6xz2" in namespace "subpath-6035"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:03:31.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6035" for this suite.
Aug  5 14:03:37.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:03:38.069: INFO: namespace subpath-6035 deletion completed in 6.091609822s

• [SLOW TEST:30.318 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
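
Note: the pod above stays Running for roughly twenty seconds because the test container repeatedly re-reads a file mounted through VolumeMount.SubPath, checking that it always sees consistent content from the atomically-written ConfigMap volume. SubPath is the field under test; a construct-only sketch (names, key, and image are assumptions):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "config",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox",
                Command: []string{"sh", "-c", "for i in $(seq 1 20); do cat /probe/content; sleep 1; done"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "config",
                    MountPath: "/probe/content",
                    // SubPath mounts a single entry of the volume rather
                    // than the whole projected directory.
                    SubPath: "content-key",
                }},
            }},
        },
    }
    b, err := json.MarshalIndent(pod, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b))
}

------------------------------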
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:03:38.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 14:03:38.179: INFO: Creating deployment "test-recreate-deployment"
Aug  5 14:03:38.190: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Aug  5 14:03:38.202: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug  5 14:03:40.210: INFO: Waiting for deployment "test-recreate-deployment" to complete
Aug  5 14:03:40.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732233018, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732233018, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732233018, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732233018, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug  5 14:03:42.216: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug  5 14:03:42.223: INFO: Updating deployment test-recreate-deployment
Aug  5 14:03:42.223: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug  5 14:03:42.550: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-207,SelfLink:/apis/apps/v1/namespaces/deployment-207/deployments/test-recreate-deployment,UID:c62956ad-d4b8-473e-b06d-054ffe62725a,ResourceVersion:3102927,Generation:2,CreationTimestamp:2020-08-05 14:03:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-05 14:03:42 +0000 UTC 2020-08-05 14:03:42 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-05 14:03:42 +0000 UTC 2020-08-05 14:03:38 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Aug  5 14:03:42.626: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-207,SelfLink:/apis/apps/v1/namespaces/deployment-207/replicasets/test-recreate-deployment-5c8c9cc69d,UID:9a81fd1f-5421-4c83-8adf-1bea26fde261,ResourceVersion:3102925,Generation:1,CreationTimestamp:2020-08-05 14:03:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c62956ad-d4b8-473e-b06d-054ffe62725a 0xc002fe00d7 0xc002fe00d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug  5 14:03:42.626: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug  5 14:03:42.626: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-207,SelfLink:/apis/apps/v1/namespaces/deployment-207/replicasets/test-recreate-deployment-6df85df6b9,UID:afd768ac-afdf-4349-932d-9af069ad820c,ResourceVersion:3102916,Generation:2,CreationTimestamp:2020-08-05 14:03:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment c62956ad-d4b8-473e-b06d-054ffe62725a 0xc002fe01a7 0xc002fe01a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug  5 14:03:42.889: INFO: Pod "test-recreate-deployment-5c8c9cc69d-rhcj2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-rhcj2,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-207,SelfLink:/api/v1/namespaces/deployment-207/pods/test-recreate-deployment-5c8c9cc69d-rhcj2,UID:66528218-d0de-4dfc-ace5-27abfe7cdaee,ResourceVersion:3102921,Generation:0,CreationTimestamp:2020-08-05 14:03:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 9a81fd1f-5421-4c83-8adf-1bea26fde261 0xc002fe0a97 0xc002fe0a98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d596g {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d596g,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-d596g true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fe0b10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fe0b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:03:42 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:03:42.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-207" for this suite.
Aug  5 14:03:49.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:03:49.190: INFO: namespace deployment-207 deletion completed in 6.297253257s

• [SLOW TEST:11.121 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
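
Note: contrast with the rolling update earlier. The dump above shows Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,}, the old ReplicaSet at Replicas:*0, and the new pod still Pending ("is not available") at teardown: Recreate tears down every old pod before any new pod is created, so old and new never overlap. The spec difference is a single field; a short sketch (names assumed):

package main

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
    labels := map[string]string{"name": "sample-pod-3"}
    d := appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: int32Ptr(1),
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            // Recreate: scale the old ReplicaSet to zero first, then
            // create the new one; no overlap between old and new pods.
            Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{Containers: []corev1.Container{{
                    Name:  "nginx",
                    Image: "docker.io/library/nginx:1.14-alpine",
                }}},
            },
        },
    }
    fmt.Println(d.Spec.Strategy.Type) // Recreate
}

------------------------------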
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:03:49.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-9d669269-109c-4895-af11-b48ff459690b
STEP: Creating a pod to test consume configMaps
Aug  5 14:03:49.271: INFO: Waiting up to 5m0s for pod "pod-configmaps-0ea4e6e9-a4d3-4291-a08a-4f867e173388" in namespace "configmap-89" to be "success or failure"
Aug  5 14:03:49.303: INFO: Pod "pod-configmaps-0ea4e6e9-a4d3-4291-a08a-4f867e173388": Phase="Pending", Reason="", readiness=false. Elapsed: 31.799117ms
Aug  5 14:03:51.306: INFO: Pod "pod-configmaps-0ea4e6e9-a4d3-4291-a08a-4f867e173388": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035484468s
Aug  5 14:03:53.332: INFO: Pod "pod-configmaps-0ea4e6e9-a4d3-4291-a08a-4f867e173388": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061178661s
STEP: Saw pod success
Aug  5 14:03:53.332: INFO: Pod "pod-configmaps-0ea4e6e9-a4d3-4291-a08a-4f867e173388" satisfied condition "success or failure"
Aug  5 14:03:53.335: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-0ea4e6e9-a4d3-4291-a08a-4f867e173388 container configmap-volume-test: 
STEP: delete the pod
Aug  5 14:03:53.402: INFO: Waiting for pod pod-configmaps-0ea4e6e9-a4d3-4291-a08a-4f867e173388 to disappear
Aug  5 14:03:53.425: INFO: Pod pod-configmaps-0ea4e6e9-a4d3-4291-a08a-4f867e173388 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:03:53.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-89" for this suite.
Aug  5 14:03:59.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:03:59.547: INFO: namespace configmap-89 deletion completed in 6.088154184s

• [SLOW TEST:10.356 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
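
Note: "multiple volumes in the same pod" means the same ConfigMap projected twice, under two volume names and two mount points, with the container reading both copies. The relevant spec fragment, as a construct-and-print sketch (names assumed):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    cmRef := corev1.LocalObjectReference{Name: "configmap-test-volume"}
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{
                {
                    Name:         "configmap-volume-1",
                    VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: cmRef}},
                },
                {
                    Name:         "configmap-volume-2",
                    VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: cmRef}},
                },
            },
            Containers: []corev1.Container{{
                Name:    "configmap-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
                VolumeMounts: []corev1.VolumeMount{
                    {Name: "configmap-volume-1", MountPath: "/etc/cm-1"},
                    {Name: "configmap-volume-2", MountPath: "/etc/cm-2"},
                },
            }},
        },
    }
    b, err := json.MarshalIndent(pod, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b))
}

------------------------------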
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:03:59.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug  5 14:03:59.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6703'
Aug  5 14:03:59.715: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug  5 14:03:59.715: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Aug  5 14:03:59.795: INFO: Waiting up to 5m0s for 1 pod to be running and ready: [e2e-test-nginx-rc-pxksz]
Aug  5 14:03:59.795: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-pxksz" in namespace "kubectl-6703" to be "running and ready"
Aug  5 14:03:59.817: INFO: Pod "e2e-test-nginx-rc-pxksz": Phase="Pending", Reason="", readiness=false. Elapsed: 21.639591ms
Aug  5 14:04:01.829: INFO: Pod "e2e-test-nginx-rc-pxksz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033664231s
Aug  5 14:04:03.833: INFO: Pod "e2e-test-nginx-rc-pxksz": Phase="Running", Reason="", readiness=true. Elapsed: 4.037650453s
Aug  5 14:04:03.833: INFO: Pod "e2e-test-nginx-rc-pxksz" satisfied condition "running and ready"
Aug  5 14:04:03.833: INFO: Wanted all 1 pod(s) to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-pxksz]
Aug  5 14:04:03.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6703'
Aug  5 14:04:03.958: INFO: stderr: ""
Aug  5 14:04:03.958: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Aug  5 14:04:03.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6703'
Aug  5 14:04:04.063: INFO: stderr: ""
Aug  5 14:04:04.063: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:04:04.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6703" for this suite.
Aug  5 14:04:26.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:04:26.168: INFO: namespace kubectl-6703 deletion completed in 22.100918693s

• [SLOW TEST:26.619 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
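
Note: the stderr captured above is kubectl itself flagging --generator=run/v1 as deprecated. What that generator produced was a bare ReplicationController whose selector is the run=<name> label; roughly the following, expressed with client-go (namespace and error handling are assumptions), kubectl create being the replacement the warning points at:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func must(err error) {
    if err != nil {
        panic(err)
    }
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    must(err)
    cs, err := kubernetes.NewForConfig(cfg)
    must(err)

    labels := map[string]string{"run": "e2e-test-nginx-rc"}
    rc := &corev1.ReplicationController{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-rc", Labels: labels},
        Spec: corev1.ReplicationControllerSpec{
            Replicas: int32Ptr(1),
            Selector: labels,
            Template: &corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{Containers: []corev1.Container{{
                    Name:  "e2e-test-nginx-rc",
                    Image: "docker.io/library/nginx:1.14-alpine",
                }}},
            },
        },
    }
    _, err = cs.CoreV1().ReplicationControllers("default").Create(rc)
    must(err)
}

The empty stdout from "kubectl logs rc/e2e-test-nginx-rc" above is expected for an idle nginx container; the step only confirms logs can be fetched through the rc reference.

------------------------------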
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:04:26.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug  5 14:04:30.328: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:04:30.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-510" for this suite.
Aug  5 14:04:36.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:04:36.440: INFO: namespace container-runtime-510 deletion completed in 6.08914173s

• [SLOW TEST:10.272 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
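
Note: TerminationMessagePolicy FallbackToLogsOnError changes where the kubelet sources the termination message: when a container fails having written nothing to its terminationMessagePath (/dev/termination-log by default), the tail of the container log is used instead, which is how the DONE printed to stdout above becomes the termination message the test matches. A sketch of such a container (busybox and the shell one-liner are assumptions):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "termination-message-test"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "termination-message-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "printf DONE; exit 1"},
                // On failure, with nothing at the termination message
                // path, the kubelet falls back to the log tail.
                TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
            }},
        },
    }
    b, err := json.MarshalIndent(pod, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b))
}

Once the pod terminates, the message surfaces at .status.containerStatuses[0].state.terminated.message.

------------------------------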
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:04:36.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug  5 14:04:36.550: INFO: Waiting up to 5m0s for pod "pod-37447f16-5926-4a45-bdc6-16c86562f78e" in namespace "emptydir-9306" to be "success or failure"
Aug  5 14:04:36.564: INFO: Pod "pod-37447f16-5926-4a45-bdc6-16c86562f78e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.861961ms
Aug  5 14:04:38.568: INFO: Pod "pod-37447f16-5926-4a45-bdc6-16c86562f78e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017884176s
Aug  5 14:04:40.572: INFO: Pod "pod-37447f16-5926-4a45-bdc6-16c86562f78e": Phase="Running", Reason="", readiness=true. Elapsed: 4.022166127s
Aug  5 14:04:42.576: INFO: Pod "pod-37447f16-5926-4a45-bdc6-16c86562f78e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026500882s
STEP: Saw pod success
Aug  5 14:04:42.576: INFO: Pod "pod-37447f16-5926-4a45-bdc6-16c86562f78e" satisfied condition "success or failure"
Aug  5 14:04:42.580: INFO: Trying to get logs from node iruya-worker2 pod pod-37447f16-5926-4a45-bdc6-16c86562f78e container test-container: 
STEP: delete the pod
Aug  5 14:04:42.601: INFO: Waiting for pod pod-37447f16-5926-4a45-bdc6-16c86562f78e to disappear
Aug  5 14:04:42.630: INFO: Pod pod-37447f16-5926-4a45-bdc6-16c86562f78e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:04:42.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9306" for this suite.
Aug  5 14:04:48.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:04:48.745: INFO: namespace emptydir-9306 deletion completed in 6.111312468s

• [SLOW TEST:12.304 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
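
Note: the (root,0644,tmpfs) triple in the title encodes the test parameters: write as root, expect file mode 0644, on a memory-backed emptyDir. Medium "Memory" is what requests tmpfs. A sketch, with a busybox shell probe standing in for the e2e mount-test image:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium Memory backs the volume with tmpfs instead
                    // of node-local disk.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox",
                // umask 0022 yields 0644 files; print mode and content so
                // both can be checked from the pod logs.
                Command: []string{"sh", "-c", "umask 0022 && echo -n hello >/test-volume/data && ls -l /test-volume/data && cat /test-volume/data"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
    b, err := json.MarshalIndent(pod, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b))
}

------------------------------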
SSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:04:48.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 14:04:52.916: INFO: Waiting up to 5m0s for pod "client-envvars-c51a97cf-525e-4cd9-8a1f-3dee85129e8b" in namespace "pods-2808" to be "success or failure"
Aug  5 14:04:52.974: INFO: Pod "client-envvars-c51a97cf-525e-4cd9-8a1f-3dee85129e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 57.919225ms
Aug  5 14:04:54.978: INFO: Pod "client-envvars-c51a97cf-525e-4cd9-8a1f-3dee85129e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061913289s
Aug  5 14:04:56.982: INFO: Pod "client-envvars-c51a97cf-525e-4cd9-8a1f-3dee85129e8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066070752s
STEP: Saw pod success
Aug  5 14:04:56.982: INFO: Pod "client-envvars-c51a97cf-525e-4cd9-8a1f-3dee85129e8b" satisfied condition "success or failure"
Aug  5 14:04:56.985: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-c51a97cf-525e-4cd9-8a1f-3dee85129e8b container env3cont: 
STEP: delete the pod
Aug  5 14:04:57.009: INFO: Waiting for pod client-envvars-c51a97cf-525e-4cd9-8a1f-3dee85129e8b to disappear
Aug  5 14:04:57.019: INFO: Pod client-envvars-c51a97cf-525e-4cd9-8a1f-3dee85129e8b no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:04:57.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2808" for this suite.
Aug  5 14:05:37.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:05:37.180: INFO: namespace pods-2808 deletion completed in 40.156300323s

• [SLOW TEST:48.435 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
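
Note: the kubelet injects, for every service that exists when a pod starts, environment variables of the form <SVC>_SERVICE_HOST and <SVC>_SERVICE_PORT (plus Docker-link-style <SVC>_PORT_* entries); services created later are not reflected, which is why a client pod checking them has to be created after the service, as this test does. A tiny in-pod probe:

package main

import (
    "fmt"
    "os"
    "strings"
)

// Run inside a pod: print the service-discovery environment variables
// that were injected at container start.
func main() {
    for _, kv := range os.Environ() {
        if strings.Contains(kv, "_SERVICE_HOST=") || strings.Contains(kv, "_SERVICE_PORT=") {
            fmt.Println(kv)
        }
    }
}

------------------------------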
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:05:37.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4630/configmap-test-d49dcb2f-8a99-4251-8ca8-482ef198771c
STEP: Creating a pod to test consume configMaps
Aug  5 14:05:37.271: INFO: Waiting up to 5m0s for pod "pod-configmaps-1037a345-0d33-4c96-b075-3416d20e59f1" in namespace "configmap-4630" to be "success or failure"
Aug  5 14:05:37.276: INFO: Pod "pod-configmaps-1037a345-0d33-4c96-b075-3416d20e59f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.304667ms
Aug  5 14:05:39.280: INFO: Pod "pod-configmaps-1037a345-0d33-4c96-b075-3416d20e59f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008616925s
Aug  5 14:05:41.309: INFO: Pod "pod-configmaps-1037a345-0d33-4c96-b075-3416d20e59f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038148445s
Aug  5 14:05:43.313: INFO: Pod "pod-configmaps-1037a345-0d33-4c96-b075-3416d20e59f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.041857603s
STEP: Saw pod success
Aug  5 14:05:43.313: INFO: Pod "pod-configmaps-1037a345-0d33-4c96-b075-3416d20e59f1" satisfied condition "success or failure"
Aug  5 14:05:43.315: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-1037a345-0d33-4c96-b075-3416d20e59f1 container env-test: 
STEP: delete the pod
Aug  5 14:05:43.601: INFO: Waiting for pod pod-configmaps-1037a345-0d33-4c96-b075-3416d20e59f1 to disappear
Aug  5 14:05:43.611: INFO: Pod pod-configmaps-1037a345-0d33-4c96-b075-3416d20e59f1 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:05:43.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4630" for this suite.
Aug  5 14:05:49.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:05:49.787: INFO: namespace configmap-4630 deletion completed in 6.17306494s

• [SLOW TEST:12.606 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
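
Note: "consumable via environment variable" is the single-key form: env[].valueFrom.configMapKeyRef maps one ConfigMap key onto one named variable. A container-level sketch (ConfigMap name and key are assumptions):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:    "env-test",
        Image:   "busybox",
        Command: []string{"sh", "-c", "echo $CONFIG_DATA_1"},
        Env: []corev1.EnvVar{{
            Name: "CONFIG_DATA_1",
            // Resolved once, at container start, from the named key.
            ValueFrom: &corev1.EnvVarSource{
                ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
                    Key:                  "data-1",
                },
            },
        }},
    }
    b, err := json.MarshalIndent(c, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b))
}

------------------------------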
SSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:05:49.788: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Aug  5 14:05:50.350: INFO: created pod pod-service-account-defaultsa
Aug  5 14:05:50.350: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug  5 14:05:50.356: INFO: created pod pod-service-account-mountsa
Aug  5 14:05:50.356: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug  5 14:05:50.399: INFO: created pod pod-service-account-nomountsa
Aug  5 14:05:50.399: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug  5 14:05:50.404: INFO: created pod pod-service-account-defaultsa-mountspec
Aug  5 14:05:50.404: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug  5 14:05:50.454: INFO: created pod pod-service-account-mountsa-mountspec
Aug  5 14:05:50.454: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug  5 14:05:50.495: INFO: created pod pod-service-account-nomountsa-mountspec
Aug  5 14:05:50.495: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug  5 14:05:50.509: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug  5 14:05:50.509: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug  5 14:05:50.518: INFO: created pod pod-service-account-mountsa-nomountspec
Aug  5 14:05:50.518: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug  5 14:05:50.540: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug  5 14:05:50.540: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:05:50.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2430" for this suite.
Aug  5 14:06:22.754: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:06:22.817: INFO: namespace svcaccounts-2430 deletion completed in 32.184186535s

• [SLOW TEST:33.029 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
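
Note: the nine pods above walk the automount matrix: ServiceAccount-level automountServiceAccountToken (default/true/false) crossed with the pod-level field, and the "token volume mount: true/false" lines confirm that the pod spec, when set, overrides the ServiceAccount (e.g. pod-service-account-mountsa-nomountspec ends up with no mount even though its SA allows one). Opting out at the pod level looks like this (the service account name is an assumption):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
    pod := corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-mountsa-nomountspec"},
        Spec: corev1.PodSpec{
            ServiceAccountName: "mount-sa",
            // The pod-level field beats whatever the ServiceAccount says,
            // so this pod gets no token volume.
            AutomountServiceAccountToken: boolPtr(false),
            Containers:                   []corev1.Container{{Name: "token-test", Image: "busybox"}},
        },
    }
    fmt.Println(*pod.Spec.AutomountServiceAccountToken) // false
}

------------------------------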
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:06:22.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-883bd382-2c44-4124-84e2-35ae71561663
STEP: Creating a pod to test consume configMaps
Aug  5 14:06:22.897: INFO: Waiting up to 5m0s for pod "pod-configmaps-87664c52-1ffa-4a8f-a1af-dd73d7337dcc" in namespace "configmap-4985" to be "success or failure"
Aug  5 14:06:22.902: INFO: Pod "pod-configmaps-87664c52-1ffa-4a8f-a1af-dd73d7337dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.878746ms
Aug  5 14:06:24.980: INFO: Pod "pod-configmaps-87664c52-1ffa-4a8f-a1af-dd73d7337dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083376829s
Aug  5 14:06:26.984: INFO: Pod "pod-configmaps-87664c52-1ffa-4a8f-a1af-dd73d7337dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087102433s
Aug  5 14:06:29.430: INFO: Pod "pod-configmaps-87664c52-1ffa-4a8f-a1af-dd73d7337dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.533443999s
Aug  5 14:06:31.434: INFO: Pod "pod-configmaps-87664c52-1ffa-4a8f-a1af-dd73d7337dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537234213s
Aug  5 14:06:33.616: INFO: Pod "pod-configmaps-87664c52-1ffa-4a8f-a1af-dd73d7337dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.719264916s
Aug  5 14:06:35.620: INFO: Pod "pod-configmaps-87664c52-1ffa-4a8f-a1af-dd73d7337dcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.723089481s
STEP: Saw pod success
Aug  5 14:06:35.620: INFO: Pod "pod-configmaps-87664c52-1ffa-4a8f-a1af-dd73d7337dcc" satisfied condition "success or failure"
Aug  5 14:06:35.623: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-87664c52-1ffa-4a8f-a1af-dd73d7337dcc container configmap-volume-test: 
STEP: delete the pod
Aug  5 14:06:36.101: INFO: Waiting for pod pod-configmaps-87664c52-1ffa-4a8f-a1af-dd73d7337dcc to disappear
Aug  5 14:06:36.358: INFO: Pod pod-configmaps-87664c52-1ffa-4a8f-a1af-dd73d7337dcc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:06:36.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4985" for this suite.
Aug  5 14:06:42.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:06:42.554: INFO: namespace configmap-4985 deletion completed in 6.184344714s

• [SLOW TEST:19.737 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
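
Note: "with mappings" adds items[] to the ConfigMap volume source, so only the listed keys are projected, each under a chosen relative path, instead of one file per key; the non-root part is the same pod-level RunAsUser shown earlier. The distinguishing fragment (key and path are assumptions):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
                // Without Items, every key becomes a file named after the
                // key; with Items, only listed keys appear, at these paths.
                Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
            },
        },
    }
    b, err := json.MarshalIndent(vol, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b))
}

------------------------------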
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:06:42.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 14:06:42.706: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf2c4eaa-21b8-4c99-a7fc-287a07dc92da" in namespace "downward-api-6083" to be "success or failure"
Aug  5 14:06:42.729: INFO: Pod "downwardapi-volume-cf2c4eaa-21b8-4c99-a7fc-287a07dc92da": Phase="Pending", Reason="", readiness=false. Elapsed: 22.167953ms
Aug  5 14:06:44.732: INFO: Pod "downwardapi-volume-cf2c4eaa-21b8-4c99-a7fc-287a07dc92da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025509022s
Aug  5 14:06:46.735: INFO: Pod "downwardapi-volume-cf2c4eaa-21b8-4c99-a7fc-287a07dc92da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028740468s
Aug  5 14:06:48.738: INFO: Pod "downwardapi-volume-cf2c4eaa-21b8-4c99-a7fc-287a07dc92da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031688296s
Aug  5 14:06:51.686: INFO: Pod "downwardapi-volume-cf2c4eaa-21b8-4c99-a7fc-287a07dc92da": Phase="Running", Reason="", readiness=true. Elapsed: 8.979061258s
Aug  5 14:06:53.689: INFO: Pod "downwardapi-volume-cf2c4eaa-21b8-4c99-a7fc-287a07dc92da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.982599458s
STEP: Saw pod success
Aug  5 14:06:53.689: INFO: Pod "downwardapi-volume-cf2c4eaa-21b8-4c99-a7fc-287a07dc92da" satisfied condition "success or failure"
Aug  5 14:06:53.692: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-cf2c4eaa-21b8-4c99-a7fc-287a07dc92da container client-container: 
STEP: delete the pod
Aug  5 14:06:53.755: INFO: Waiting for pod downwardapi-volume-cf2c4eaa-21b8-4c99-a7fc-287a07dc92da to disappear
Aug  5 14:06:53.768: INFO: Pod downwardapi-volume-cf2c4eaa-21b8-4c99-a7fc-287a07dc92da no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:06:53.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6083" for this suite.
Aug  5 14:06:59.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:06:59.870: INFO: namespace downward-api-6083 deletion completed in 6.099666944s

• [SLOW TEST:17.315 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
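
Note: the downward API volume projects the container's own resource request into a file; with a resourceFieldRef and no divisor, requests.memory is written in bytes. The request being projected has to be declared on the container itself. Sketch (names and the 32Mi figure are assumptions):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-test"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"cat", "/etc/podinfo/memory_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceMemory: resource.MustParse("32Mi"),
                    },
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_request",
                            // Written in bytes unless a Divisor is set.
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "requests.memory",
                            },
                        }},
                    },
                },
            }},
        },
    }
    b, err := json.MarshalIndent(pod, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b))
}

------------------------------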
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:06:59.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-8724/configmap-test-a8656bf5-9357-4340-8f09-06c303ec28f2
STEP: Creating a pod to test consume configMaps
Aug  5 14:06:59.950: INFO: Waiting up to 5m0s for pod "pod-configmaps-ffca9564-c131-48b4-9e87-239fb2f55fdd" in namespace "configmap-8724" to be "success or failure"
Aug  5 14:06:59.959: INFO: Pod "pod-configmaps-ffca9564-c131-48b4-9e87-239fb2f55fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.182203ms
Aug  5 14:07:02.817: INFO: Pod "pod-configmaps-ffca9564-c131-48b4-9e87-239fb2f55fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.866978403s
Aug  5 14:07:04.821: INFO: Pod "pod-configmaps-ffca9564-c131-48b4-9e87-239fb2f55fdd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.87082937s
Aug  5 14:07:06.824: INFO: Pod "pod-configmaps-ffca9564-c131-48b4-9e87-239fb2f55fdd": Phase="Running", Reason="", readiness=true. Elapsed: 6.87364291s
Aug  5 14:07:08.827: INFO: Pod "pod-configmaps-ffca9564-c131-48b4-9e87-239fb2f55fdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.876405214s
STEP: Saw pod success
Aug  5 14:07:08.827: INFO: Pod "pod-configmaps-ffca9564-c131-48b4-9e87-239fb2f55fdd" satisfied condition "success or failure"
Aug  5 14:07:08.829: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ffca9564-c131-48b4-9e87-239fb2f55fdd container env-test: 
STEP: delete the pod
Aug  5 14:07:08.856: INFO: Waiting for pod pod-configmaps-ffca9564-c131-48b4-9e87-239fb2f55fdd to disappear
Aug  5 14:07:08.918: INFO: Pod pod-configmaps-ffca9564-c131-48b4-9e87-239fb2f55fdd no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:07:08.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8724" for this suite.
Aug  5 14:07:14.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:07:15.030: INFO: namespace configmap-8724 deletion completed in 6.107938684s

• [SLOW TEST:15.160 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
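
Note: "via the environment" (as distinct from "via environment variable" earlier) exercises the wholesale form: envFrom imports every key of a ConfigMap as an environment variable, optionally behind a prefix. Treat the fragment below as an assumption-laden sketch of that route:

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    c := corev1.Container{
        Name:    "env-test",
        Image:   "busybox",
        Command: []string{"env"},
        // Every key of the ConfigMap arrives as an environment variable,
        // with the (optional) prefix prepended to each name.
        EnvFrom: []corev1.EnvFromSource{{
            Prefix: "CONFIG_",
            ConfigMapRef: &corev1.ConfigMapEnvSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
            },
        }},
    }
    b, err := json.MarshalIndent(c, "", "  ")
    if err != nil {
        panic(err)
    }
    fmt.Println(string(b))
}

------------------------------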
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:07:15.030: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug  5 14:07:15.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4710'
Aug  5 14:07:15.197: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug  5 14:07:15.197: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Aug  5 14:07:17.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4710'
Aug  5 14:07:17.555: INFO: stderr: ""
Aug  5 14:07:17.555: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:07:17.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4710" for this suite.
Aug  5 14:07:39.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:07:39.694: INFO: namespace kubectl-4710 deletion completed in 22.136151974s

• [SLOW TEST:24.664 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
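
Note: the stderr captured above warns that `--generator=deployment/apps.v1` is deprecated. A sketch of the non-deprecated equivalent of what this spec does (create, verify, delete), assuming the same image:

  $ kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
  $ kubectl get deployment e2e-test-nginx-deployment    # the deployment was created
  $ kubectl get pods -l app=e2e-test-nginx-deployment   # the controlled pod was created
  $ kubectl delete deployment e2e-test-nginx-deployment

`kubectl create deployment` labels the pod template with app=<name>, which is why the label selector in the second get works.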
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:07:39.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 14:07:39.731: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:07:40.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-566" for this suite.
Aug  5 14:07:46.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:07:46.923: INFO: namespace custom-resource-definition-566 deletion completed in 6.076050787s

• [SLOW TEST:7.228 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
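
Note: this spec runs almost entirely through the API, so the log shows little detail. A sketch of the create/delete round trip it exercises, using the apiextensions.k8s.io/v1beta1 API that a v1.15 server serves; group and kind names here are illustrative:

  $ cat <<EOF | kubectl apply -f -
  apiVersion: apiextensions.k8s.io/v1beta1
  kind: CustomResourceDefinition
  metadata:
    name: foos.mygroup.example.com    # must be <plural>.<group>
  spec:
    group: mygroup.example.com
    version: v1
    scope: Namespaced
    names:
      plural: foos
      singular: foo
      kind: Foo
  EOF
  $ kubectl get crd foos.mygroup.example.com
  $ kubectl delete crd foos.mygroup.example.com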
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:07:46.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:07:59.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5636" for this suite.
Aug  5 14:08:05.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:08:05.126: INFO: namespace kubelet-test-5636 deletion completed in 6.074780941s

• [SLOW TEST:18.204 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
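
Note: the spec schedules a container whose command always exits non-zero and asserts that its status ends up with a terminated state carrying a reason. A hand-run sketch of the same check (pod name illustrative):

  $ kubectl run bin-false --image=busybox --restart=Never -- /bin/false
  $ kubectl get pod bin-false -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
  # prints "Error" once the container has run and exited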
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:08:05.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0805 14:08:19.103628       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug  5 14:08:19.103: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:08:19.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4886" for this suite.
Aug  5 14:08:27.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:08:27.183: INFO: namespace gc-4886 deletion completed in 8.076746953s

• [SLOW TEST:22.056 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
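
Note: the steps above give half of the pods a second ownerReference pointing at simpletest-rc-to-stay, then delete simpletest-rc-to-be-deleted and expect the doubly-owned pods to survive. A sketch of the two building blocks, with placeholder names/UIDs and assuming `kubectl proxy` on localhost:8001:

  # add a second owner to a pod (the uid must match the live rc-to-stay object)
  $ kubectl patch pod <pod-name> --type=json -p '[{"op":"add","path":"/metadata/ownerReferences/-","value":{"apiVersion":"v1","kind":"ReplicationController","name":"simpletest-rc-to-stay","uid":"<rc-uid>"}}]'

  # delete the first owner with foreground propagation via the API
  $ kubectl proxy &
  $ curl -s -X DELETE -H 'Content-Type: application/json' \
      -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
      http://localhost:8001/api/v1/namespaces/<ns>/replicationcontrollers/simpletest-rc-to-be-deleted

A dependent is only garbage-collected once every owner in metadata.ownerReferences is gone, so the pods also owned by simpletest-rc-to-stay remain.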
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:08:27.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-vm4t
STEP: Creating a pod to test atomic-volume-subpath
Aug  5 14:08:27.319: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-vm4t" in namespace "subpath-7199" to be "success or failure"
Aug  5 14:08:27.324: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.788228ms
Aug  5 14:08:29.327: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007365609s
Aug  5 14:08:31.330: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010855366s
Aug  5 14:08:33.333: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 6.013851004s
Aug  5 14:08:35.335: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 8.016099711s
Aug  5 14:08:37.338: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 10.018929613s
Aug  5 14:08:39.341: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 12.021816295s
Aug  5 14:08:41.344: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 14.024770222s
Aug  5 14:08:43.347: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 16.028204492s
Aug  5 14:08:45.350: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 18.03067849s
Aug  5 14:08:47.352: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 20.033041112s
Aug  5 14:08:49.355: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 22.035604266s
Aug  5 14:08:51.358: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 24.03903219s
Aug  5 14:08:53.622: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 26.302646836s
Aug  5 14:08:55.626: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 28.306721338s
Aug  5 14:08:57.714: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Running", Reason="", readiness=true. Elapsed: 30.39468671s
Aug  5 14:08:59.717: INFO: Pod "pod-subpath-test-secret-vm4t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.39796017s
STEP: Saw pod success
Aug  5 14:08:59.717: INFO: Pod "pod-subpath-test-secret-vm4t" satisfied condition "success or failure"
Aug  5 14:08:59.719: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-vm4t container test-container-subpath-secret-vm4t: 
STEP: delete the pod
Aug  5 14:08:59.735: INFO: Waiting for pod pod-subpath-test-secret-vm4t to disappear
Aug  5 14:08:59.753: INFO: Pod pod-subpath-test-secret-vm4t no longer exists
STEP: Deleting pod pod-subpath-test-secret-vm4t
Aug  5 14:08:59.753: INFO: Deleting pod "pod-subpath-test-secret-vm4t" in namespace "subpath-7199"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:08:59.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7199" for this suite.
Aug  5 14:09:05.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:09:05.857: INFO: namespace subpath-7199 deletion completed in 6.075105166s

• [SLOW TEST:38.674 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
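
Note: the pod above mounts a single secret key into the container via a subPath and exits "Succeeded" after polling the file for a while. A minimal sketch of the same mount shape (names illustrative; this one reads the file once rather than polling):

  $ kubectl create secret generic my-secret --from-literal=secret-key=hello
  $ cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-subpath-test-secret
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-subpath-secret
      image: busybox
      command: ["cat", "/mnt/result"]
      volumeMounts:
      - name: secret-vol
        mountPath: /mnt/result
        subPath: secret-key      # mount just this key's file out of the volume
    volumes:
    - name: secret-vol
      secret:
        secretName: my-secret
  EOF
  $ kubectl logs pod-subpath-test-secret   # expect: hello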
------------------------------
SSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:09:05.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-1231/secret-test-646ec788-01d6-4fbe-9a34-464994a0b414
STEP: Creating a pod to test consume secrets
Aug  5 14:09:05.909: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae246649-a9b0-4894-b4b5-cc2e31198185" in namespace "secrets-1231" to be "success or failure"
Aug  5 14:09:05.914: INFO: Pod "pod-configmaps-ae246649-a9b0-4894-b4b5-cc2e31198185": Phase="Pending", Reason="", readiness=false. Elapsed: 4.742133ms
Aug  5 14:09:07.917: INFO: Pod "pod-configmaps-ae246649-a9b0-4894-b4b5-cc2e31198185": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008099175s
Aug  5 14:09:10.104: INFO: Pod "pod-configmaps-ae246649-a9b0-4894-b4b5-cc2e31198185": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194833578s
Aug  5 14:09:12.108: INFO: Pod "pod-configmaps-ae246649-a9b0-4894-b4b5-cc2e31198185": Phase="Running", Reason="", readiness=true. Elapsed: 6.199069952s
Aug  5 14:09:14.112: INFO: Pod "pod-configmaps-ae246649-a9b0-4894-b4b5-cc2e31198185": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.20290192s
STEP: Saw pod success
Aug  5 14:09:14.112: INFO: Pod "pod-configmaps-ae246649-a9b0-4894-b4b5-cc2e31198185" satisfied condition "success or failure"
Aug  5 14:09:14.115: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ae246649-a9b0-4894-b4b5-cc2e31198185 container env-test: 
STEP: delete the pod
Aug  5 14:09:15.443: INFO: Waiting for pod pod-configmaps-ae246649-a9b0-4894-b4b5-cc2e31198185 to disappear
Aug  5 14:09:15.469: INFO: Pod pod-configmaps-ae246649-a9b0-4894-b4b5-cc2e31198185 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:09:15.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1231" for this suite.
Aug  5 14:09:21.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:09:21.647: INFO: namespace secrets-1231 deletion completed in 6.175156559s

• [SLOW TEST:15.789 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
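
Note: same shape as the ConfigMap-as-environment spec earlier in this run, with secretKeyRef in place of configMapKeyRef (which is also why the pod here is still named "pod-configmaps-..."). Sketch with illustrative names:

  $ kubectl create secret generic secret-test --from-literal=data-1=value-1
  $ cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secret-env
  spec:
    restartPolicy: Never
    containers:
    - name: env-test
      image: busybox
      command: ["env"]
      env:
      - name: SECRET_DATA
        valueFrom:
          secretKeyRef:          # value sourced from the Secret key
            name: secret-test
            key: data-1
  EOF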
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:09:21.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 14:09:21.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2668'
Aug  5 14:09:22.003: INFO: stderr: ""
Aug  5 14:09:22.003: INFO: stdout: "replicationcontroller/redis-master created\n"
Aug  5 14:09:22.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2668'
Aug  5 14:09:22.307: INFO: stderr: ""
Aug  5 14:09:22.307: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug  5 14:09:23.310: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:09:23.310: INFO: Found 0 / 1
Aug  5 14:09:24.660: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:09:24.660: INFO: Found 0 / 1
Aug  5 14:09:25.310: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:09:25.311: INFO: Found 0 / 1
Aug  5 14:09:26.337: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:09:26.337: INFO: Found 0 / 1
Aug  5 14:09:27.311: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:09:27.311: INFO: Found 0 / 1
Aug  5 14:09:28.403: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:09:28.403: INFO: Found 0 / 1
Aug  5 14:09:29.310: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:09:29.310: INFO: Found 0 / 1
Aug  5 14:09:30.382: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:09:30.382: INFO: Found 0 / 1
Aug  5 14:09:31.310: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:09:31.310: INFO: Found 0 / 1
Aug  5 14:09:32.311: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:09:32.311: INFO: Found 1 / 1
Aug  5 14:09:32.311: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug  5 14:09:32.313: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:09:32.313: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug  5 14:09:32.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-7cznk --namespace=kubectl-2668'
Aug  5 14:09:32.423: INFO: stderr: ""
Aug  5 14:09:32.423: INFO: stdout: "Name:           redis-master-7cznk\nNamespace:      kubectl-2668\nPriority:       0\nNode:           iruya-worker2/172.18.0.7\nStart Time:     Wed, 05 Aug 2020 14:09:22 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.244.2.171\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://45b58bcd915b5afd32aa1b5321d29cfe17442223346103466fc681bfa3a98bf9\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Wed, 05 Aug 2020 14:09:31 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hx6kd (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-hx6kd:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-hx6kd\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                    Message\n  ----    ------     ----  ----                    -------\n  Normal  Scheduled  10s   default-scheduler       Successfully assigned kubectl-2668/redis-master-7cznk to iruya-worker2\n  Normal  Pulled     9s    kubelet, iruya-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, iruya-worker2  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-worker2  Started container redis-master\n"
Aug  5 14:09:32.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-2668'
Aug  5 14:09:32.524: INFO: stderr: ""
Aug  5 14:09:32.524: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-2668\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  10s   replication-controller  Created pod: redis-master-7cznk\n"
Aug  5 14:09:32.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-2668'
Aug  5 14:09:32.612: INFO: stderr: ""
Aug  5 14:09:32.612: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-2668\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.103.121.107\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.2.171:6379\nSession Affinity:  None\nEvents:            <none>\n"
Aug  5 14:09:32.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Aug  5 14:09:32.715: INFO: stderr: ""
Aug  5 14:09:32.715: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 19 Jul 2020 21:15:33 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Wed, 05 Aug 2020 14:09:22 +0000   Sun, 19 Jul 2020 21:15:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Wed, 05 Aug 2020 14:09:22 +0000   Sun, 19 Jul 2020 21:15:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Wed, 05 Aug 2020 14:09:22 +0000   Sun, 19 Jul 2020 21:15:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Wed, 05 Aug 2020 14:09:22 +0000   Sun, 19 Jul 2020 21:16:03 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.9\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nSystem Info:\n Machine ID:                 ca83ac9a93d54502bb9afb972c3f1f0b\n System UUID:                1d4ac873-683f-4805-8579-15bbb4e4df77\n Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version:             4.15.0-109-generic\n OS Image:                   Ubuntu 20.04 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version:            v1.15.12\n Kube-Proxy Version:         v1.15.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-clz9n                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     16d\n  kube-system                coredns-5d4dd4b4db-w42x4                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     16d\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         16d\n  kube-system                kindnet-xbjsm                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      16d\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         16d\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         16d\n  kube-system                kube-proxy-nwhvb                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         16d\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         16d\n  local-path-storage         local-path-provisioner-668779bd7-sf66r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              <none>\n"
Aug  5 14:09:32.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2668'
Aug  5 14:09:32.818: INFO: stderr: ""
Aug  5 14:09:32.818: INFO: stdout: "Name:         kubectl-2668\nLabels:       e2e-framework=kubectl\n              e2e-run=c242b5bc-99b4-4980-bd2d-5cd4ac7b2498\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:09:32.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2668" for this suite.
Aug  5 14:09:54.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:09:54.890: INFO: namespace kubectl-2668 deletion completed in 22.069482916s

• [SLOW TEST:33.243 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
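
Note: the assertions in this spec amount to checking that `kubectl describe` output contains the fields shown above. A quick manual equivalent against the same objects:

  $ kubectl describe pod redis-master-7cznk -n kubectl-2668 | \
      grep -E '^(Name|Namespace|Node|Status|Controlled By):'
  $ kubectl describe rc redis-master -n kubectl-2668 | grep -E '^(Replicas|Pods Status):'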
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:09:54.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-ebbe92b8-98e2-4a4e-9588-ff11995be674 in namespace container-probe-4686
Aug  5 14:10:09.045: INFO: Started pod liveness-ebbe92b8-98e2-4a4e-9588-ff11995be674 in namespace container-probe-4686
STEP: checking the pod's current state and verifying that restartCount is present
Aug  5 14:10:09.047: INFO: Initial restart count of pod liveness-ebbe92b8-98e2-4a4e-9588-ff11995be674 is 0
Aug  5 14:10:31.241: INFO: Restart count of pod container-probe-4686/liveness-ebbe92b8-98e2-4a4e-9588-ff11995be674 is now 1 (22.194166738s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:10:31.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4686" for this suite.
Aug  5 14:10:39.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:10:39.279: INFO: namespace container-probe-4686 deletion completed in 7.942233804s

• [SLOW TEST:44.388 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
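
Note: the restart observed above (count 0 -> 1 after ~22s) comes from an HTTP liveness probe on /healthz failing once the server starts returning errors, at which point the kubelet kills and restarts the container. A sketch in the shape of the public docs' liveness example (image and timings are the docs', not necessarily this spec's):

  $ cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-http
  spec:
    containers:
    - name: liveness
      image: k8s.gcr.io/liveness
      args: ["/server"]          # serves /healthz OK briefly, then returns 500s
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 3
        periodSeconds: 3
  EOF
  $ kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'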
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:10:39.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 14:10:39.968: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug  5 14:10:44.971: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug  5 14:10:48.977: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug  5 14:10:49.088: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-7277,SelfLink:/apis/apps/v1/namespaces/deployment-7277/deployments/test-cleanup-deployment,UID:48c103a3-ab3d-4ae1-8edf-9cf66770d8a9,ResourceVersion:3104460,Generation:1,CreationTimestamp:2020-08-05 14:10:48 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Aug  5 14:10:49.095: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-7277,SelfLink:/apis/apps/v1/namespaces/deployment-7277/replicasets/test-cleanup-deployment-55bbcbc84c,UID:851fb12b-4044-4fd9-bdf6-fca7b6541696,ResourceVersion:3104462,Generation:1,CreationTimestamp:2020-08-05 14:10:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 48c103a3-ab3d-4ae1-8edf-9cf66770d8a9 0xc00287c897 0xc00287c898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug  5 14:10:49.095: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Aug  5 14:10:49.095: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-7277,SelfLink:/apis/apps/v1/namespaces/deployment-7277/replicasets/test-cleanup-controller,UID:707a754f-4686-4717-b389-de096f2759d2,ResourceVersion:3104461,Generation:1,CreationTimestamp:2020-08-05 14:10:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 48c103a3-ab3d-4ae1-8edf-9cf66770d8a9 0xc00287c707 0xc00287c708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug  5 14:10:49.118: INFO: Pod "test-cleanup-controller-xw5h8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-xw5h8,GenerateName:test-cleanup-controller-,Namespace:deployment-7277,SelfLink:/api/v1/namespaces/deployment-7277/pods/test-cleanup-controller-xw5h8,UID:e1d56e34-d7d8-413c-8d31-c98b42813895,ResourceVersion:3104456,Generation:0,CreationTimestamp:2020-08-05 14:10:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 707a754f-4686-4717-b389-de096f2759d2 0xc00287d187 0xc00287d188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qrc8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qrc8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qrc8w true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00287d200} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00287d220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:10:40 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:10:47 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:10:47 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:10:39 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.172,StartTime:2020-08-05 14:10:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-05 14:10:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7ba0960234b45f17e17e71ec11e39454259b59e360afffb63fb361b6f87a984a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:10:49.118: INFO: Pod "test-cleanup-deployment-55bbcbc84c-xtglf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-xtglf,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-7277,SelfLink:/api/v1/namespaces/deployment-7277/pods/test-cleanup-deployment-55bbcbc84c-xtglf,UID:c04e1bca-6d21-482a-b9db-27640dbb5e64,ResourceVersion:3104467,Generation:0,CreationTimestamp:2020-08-05 14:10:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 851fb12b-4044-4fd9-bdf6-fca7b6541696 0xc00287d307 0xc00287d308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qrc8w {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qrc8w,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-qrc8w true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00287d380} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00287d3a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:10:49 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:10:49.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7277" for this suite.
Aug  5 14:10:59.326: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:10:59.378: INFO: namespace deployment-7277 deletion completed in 10.232824308s

• [SLOW TEST:20.099 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
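
Note: the deployment dump above shows RevisionHistoryLimit:*0, which is what makes the controller delete old ReplicaSets as soon as they are scaled down. A sketch of the same effect (names illustrative):

  $ kubectl create deployment cleanup-demo --image=docker.io/library/nginx:1.14-alpine
  $ kubectl patch deployment cleanup-demo -p '{"spec":{"revisionHistoryLimit":0}}'
  $ kubectl set image deployment/cleanup-demo nginx=gcr.io/kubernetes-e2e-test-images/redis:1.0
  $ kubectl get rs -l app=cleanup-demo   # once the rollout completes, only the new ReplicaSet is left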
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:10:59.379: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-3fe0f3db-0771-4855-aa5c-25ba13583479
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:11:00.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1938" for this suite.
Aug  5 14:11:06.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:11:06.341: INFO: namespace secrets-1938 deletion completed in 6.06384645s

• [SLOW TEST:6.962 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
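
Note: the only action in this spec is an API write that is expected to fail validation, which is why no pod ever appears. A sketch of the rejected object (name illustrative; the value is base64, as .data requires):

  $ cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-emptykey-test
  data:
    "": dmFsdWUtMQ==
  EOF

This is rejected with an Invalid error because "" is not a valid data key; the spec passes when it observes that error.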
------------------------------
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:11:06.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2764.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2764.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2764.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2764.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2764.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2764.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2764.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2764.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2764.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2764.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.136.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.136.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.136.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.136.246_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2764.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2764.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2764.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2764.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2764.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2764.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2764.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2764.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2764.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2764.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2764.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 246.136.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.136.246_udp@PTR;check="$$(dig +tcp +noall +answer +search 246.136.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.136.246_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug  5 14:11:18.864: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:18.867: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:18.870: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:18.873: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:18.891: INFO: Unable to read jessie_udp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:18.894: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:18.896: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:18.899: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:18.914: INFO: Lookups using dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251 failed for: [wheezy_udp@dns-test-service.dns-2764.svc.cluster.local wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local jessie_udp@dns-test-service.dns-2764.svc.cluster.local jessie_tcp@dns-test-service.dns-2764.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local]

Aug  5 14:11:23.918: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:23.920: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:23.923: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:23.925: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:23.942: INFO: Unable to read jessie_udp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:23.945: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:23.947: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:23.949: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:23.962: INFO: Lookups using dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251 failed for: [wheezy_udp@dns-test-service.dns-2764.svc.cluster.local wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local jessie_udp@dns-test-service.dns-2764.svc.cluster.local jessie_tcp@dns-test-service.dns-2764.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local]

Aug  5 14:11:28.918: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:28.922: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:28.924: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:28.927: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:28.944: INFO: Unable to read jessie_udp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:28.947: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:28.949: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:28.951: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:28.965: INFO: Lookups using dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251 failed for: [wheezy_udp@dns-test-service.dns-2764.svc.cluster.local wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local jessie_udp@dns-test-service.dns-2764.svc.cluster.local jessie_tcp@dns-test-service.dns-2764.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local]

Aug  5 14:11:33.922: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:33.925: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:33.927: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:33.930: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:33.947: INFO: Unable to read jessie_udp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:33.949: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:33.950: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:33.952: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:33.963: INFO: Lookups using dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251 failed for: [wheezy_udp@dns-test-service.dns-2764.svc.cluster.local wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local jessie_udp@dns-test-service.dns-2764.svc.cluster.local jessie_tcp@dns-test-service.dns-2764.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local]

Aug  5 14:11:38.919: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:38.922: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:38.925: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:38.928: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:38.947: INFO: Unable to read jessie_udp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:38.949: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:38.950: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:38.953: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:38.966: INFO: Lookups using dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251 failed for: [wheezy_udp@dns-test-service.dns-2764.svc.cluster.local wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local jessie_udp@dns-test-service.dns-2764.svc.cluster.local jessie_tcp@dns-test-service.dns-2764.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local]

Aug  5 14:11:43.918: INFO: Unable to read wheezy_udp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:43.920: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:43.923: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:43.926: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:43.941: INFO: Unable to read jessie_udp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:44.004: INFO: Unable to read jessie_tcp@dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:44.007: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:44.009: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local from pod dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251: the server could not find the requested resource (get pods dns-test-19b57231-a389-4e25-b6e7-bf08dee17251)
Aug  5 14:11:44.028: INFO: Lookups using dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251 failed for: [wheezy_udp@dns-test-service.dns-2764.svc.cluster.local wheezy_tcp@dns-test-service.dns-2764.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local jessie_udp@dns-test-service.dns-2764.svc.cluster.local jessie_tcp@dns-test-service.dns-2764.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2764.svc.cluster.local]

Aug  5 14:11:48.957: INFO: DNS probes using dns-2764/dns-test-19b57231-a389-4e25-b6e7-bf08dee17251 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:11:53.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2764" for this suite.
Aug  5 14:12:01.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:12:01.665: INFO: namespace dns-2764 deletion completed in 8.225290856s

• [SLOW TEST:55.324 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
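
Note: the dig loop shown at the top of this test polls the service's A and SRV records over UDP and TCP until answers arrive, writing an OK marker per name. A minimal manual equivalent, assuming a cluster reachable via /root/.kube/config; the debug image and pod name are illustrative, not from the test:

  # Throwaway pod with dig available (image choice is an assumption).
  kubectl run dns-debug --restart=Never --image=tutum/dnsutils -- sleep 3600
  kubectl wait --for=condition=Ready pod/dns-debug
  # A record of the service, over UDP and then forcing TCP.
  kubectl exec dns-debug -- dig +notcp +short dns-test-service.dns-2764.svc.cluster.local A
  kubectl exec dns-debug -- dig +tcp +short dns-test-service.dns-2764.svc.cluster.local A
  # SRV record published for the service's named port "http".
  kubectl exec dns-debug -- dig +short _http._tcp.dns-test-service.dns-2764.svc.cluster.local SRV
  kubectl delete pod dns-debug
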
------------------------------
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:12:01.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Aug  5 14:12:01.748: INFO: Waiting up to 5m0s for pod "client-containers-9addc661-5be6-4501-a292-eb55cbc4ef31" in namespace "containers-1799" to be "success or failure"
Aug  5 14:12:01.756: INFO: Pod "client-containers-9addc661-5be6-4501-a292-eb55cbc4ef31": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095503ms
Aug  5 14:12:03.812: INFO: Pod "client-containers-9addc661-5be6-4501-a292-eb55cbc4ef31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063940554s
Aug  5 14:12:05.815: INFO: Pod "client-containers-9addc661-5be6-4501-a292-eb55cbc4ef31": Phase="Running", Reason="", readiness=true. Elapsed: 4.06690789s
Aug  5 14:12:07.818: INFO: Pod "client-containers-9addc661-5be6-4501-a292-eb55cbc4ef31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070019418s
STEP: Saw pod success
Aug  5 14:12:07.818: INFO: Pod "client-containers-9addc661-5be6-4501-a292-eb55cbc4ef31" satisfied condition "success or failure"
Aug  5 14:12:07.821: INFO: Trying to get logs from node iruya-worker pod client-containers-9addc661-5be6-4501-a292-eb55cbc4ef31 container test-container: 
STEP: delete the pod
Aug  5 14:12:07.859: INFO: Waiting for pod client-containers-9addc661-5be6-4501-a292-eb55cbc4ef31 to disappear
Aug  5 14:12:07.868: INFO: Pod client-containers-9addc661-5be6-4501-a292-eb55cbc4ef31 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:12:07.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1799" for this suite.
Aug  5 14:12:13.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:12:13.960: INFO: namespace containers-1799 deletion completed in 6.088672985s

• [SLOW TEST:12.294 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
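
Note: the pod under test sets neither command nor args, so the container runs the image's built-in ENTRYPOINT and CMD. A minimal sketch of such a spec; the pod name and image are illustrative (the suite uses its own test image):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: image-defaults
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29   # image choice is an assumption
      # no command/args: the image's ENTRYPOINT and CMD apply unchanged
  EOF
  kubectl logs image-defaults   # whatever the image's default command printed
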
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:12:13.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug  5 14:12:14.001: INFO: namespace kubectl-8327
Aug  5 14:12:14.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8327'
Aug  5 14:12:14.248: INFO: stderr: ""
Aug  5 14:12:14.248: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug  5 14:12:15.251: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:12:15.252: INFO: Found 0 / 1
Aug  5 14:12:16.253: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:12:16.253: INFO: Found 0 / 1
Aug  5 14:12:17.310: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:12:17.310: INFO: Found 0 / 1
Aug  5 14:12:18.252: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:12:18.252: INFO: Found 0 / 1
Aug  5 14:12:19.257: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:12:19.257: INFO: Found 1 / 1
Aug  5 14:12:19.258: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug  5 14:12:19.260: INFO: Selector matched 1 pods for map[app:redis]
Aug  5 14:12:19.260: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug  5 14:12:19.260: INFO: wait on redis-master startup in kubectl-8327 
Aug  5 14:12:19.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-v6857 redis-master --namespace=kubectl-8327'
Aug  5 14:12:21.808: INFO: stderr: ""
Aug  5 14:12:21.809: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Aug 14:12:18.449 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Aug 14:12:18.449 # Server started, Redis version 3.2.12\n1:M 05 Aug 14:12:18.449 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Aug 14:12:18.449 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Aug  5 14:12:21.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8327'
Aug  5 14:12:22.013: INFO: stderr: ""
Aug  5 14:12:22.013: INFO: stdout: "service/rm2 exposed\n"
Aug  5 14:12:22.019: INFO: Service rm2 in namespace kubectl-8327 found.
STEP: exposing service
Aug  5 14:12:24.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8327'
Aug  5 14:12:24.381: INFO: stderr: ""
Aug  5 14:12:24.381: INFO: stdout: "service/rm3 exposed\n"
Aug  5 14:12:24.397: INFO: Service rm3 in namespace kubectl-8327 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:12:26.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8327" for this suite.
Aug  5 14:12:48.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:12:48.512: INFO: namespace kubectl-8327 deletion completed in 22.104896516s

• [SLOW TEST:34.552 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
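
Note: the test exposes the replication controller as a new service (rm2), then re-exposes that service under another name and port (rm3); both end up selecting the same redis pod. The equivalent commands, exactly as run above but usable standalone:

  kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8327
  kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8327
  kubectl get endpoints rm2 rm3 --namespace=kubectl-8327   # both should list the redis pod on 6379
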
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:12:48.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8429
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug  5 14:12:48.583: INFO: Found 0 stateful pods, waiting for 3
Aug  5 14:12:58.587: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:12:58.587: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:12:58.587: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug  5 14:13:08.588: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:13:08.588: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:13:08.588: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug  5 14:13:08.615: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug  5 14:13:18.668: INFO: Updating stateful set ss2
Aug  5 14:13:18.694: INFO: Waiting for Pod statefulset-8429/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Aug  5 14:13:28.847: INFO: Found 2 stateful pods, waiting for 3
Aug  5 14:13:38.853: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:13:38.853: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:13:38.853: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug  5 14:13:38.878: INFO: Updating stateful set ss2
Aug  5 14:13:38.891: INFO: Waiting for Pod statefulset-8429/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug  5 14:13:48.925: INFO: Updating stateful set ss2
Aug  5 14:13:48.945: INFO: Waiting for StatefulSet statefulset-8429/ss2 to complete update
Aug  5 14:13:48.945: INFO: Waiting for Pod statefulset-8429/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug  5 14:13:58.952: INFO: Deleting all statefulset in ns statefulset-8429
Aug  5 14:13:58.955: INFO: Scaling statefulset ss2 to 0
Aug  5 14:14:28.986: INFO: Waiting for statefulset status.replicas updated to 0
Aug  5 14:14:28.989: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:14:29.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8429" for this suite.
Aug  5 14:14:35.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:14:35.090: INFO: namespace statefulset-8429 deletion completed in 6.084778588s

• [SLOW TEST:106.578 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
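
Note: the canary above relies on the RollingUpdate partition: only pods with ordinal >= partition receive the new revision, and lowering the partition phases the rollout through the remaining ordinals. A minimal sketch against a StatefulSet named ss2; the container name "nginx" is an assumption:

  # Only ss2-2 (ordinal >= 2) will receive the new template.
  kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
  kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
  # Phase the rollout: each decrement releases one more ordinal.
  kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":1}}}}'
  kubectl patch statefulset ss2 -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
  kubectl rollout status statefulset/ss2
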
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:14:35.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Aug  5 14:14:35.181: INFO: Waiting up to 5m0s for pod "pod-5ef570c7-2971-49f2-a684-3985334cdd84" in namespace "emptydir-9726" to be "success or failure"
Aug  5 14:14:35.197: INFO: Pod "pod-5ef570c7-2971-49f2-a684-3985334cdd84": Phase="Pending", Reason="", readiness=false. Elapsed: 16.104086ms
Aug  5 14:14:37.201: INFO: Pod "pod-5ef570c7-2971-49f2-a684-3985334cdd84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020081016s
Aug  5 14:14:39.206: INFO: Pod "pod-5ef570c7-2971-49f2-a684-3985334cdd84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024606099s
STEP: Saw pod success
Aug  5 14:14:39.206: INFO: Pod "pod-5ef570c7-2971-49f2-a684-3985334cdd84" satisfied condition "success or failure"
Aug  5 14:14:39.208: INFO: Trying to get logs from node iruya-worker pod pod-5ef570c7-2971-49f2-a684-3985334cdd84 container test-container: 
STEP: delete the pod
Aug  5 14:14:39.234: INFO: Waiting for pod pod-5ef570c7-2971-49f2-a684-3985334cdd84 to disappear
Aug  5 14:14:39.239: INFO: Pod pod-5ef570c7-2971-49f2-a684-3985334cdd84 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:14:39.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9726" for this suite.
Aug  5 14:14:45.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:14:45.333: INFO: namespace emptydir-9726 deletion completed in 6.090947099s

• [SLOW TEST:10.243 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
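
Note: an emptyDir with no medium set is backed by node disk, and the kubelet mounts it world-writable; the test's container verifies the 0777 mode. A sketch that just stats the mount; pod name and image are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29   # image choice is an assumption
      command: ["sh", "-c", "ls -ld /test-volume"]   # expect drwxrwxrwx (0777)
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}          # default medium: the node's disk
  EOF
  kubectl logs emptydir-mode   # once the pod has Succeeded
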
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:14:45.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug  5 14:14:53.428: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:14:53.461: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  5 14:14:55.461: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:14:55.466: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  5 14:14:57.461: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:14:57.466: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  5 14:14:59.461: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:14:59.466: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  5 14:15:01.461: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:15:01.465: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  5 14:15:03.461: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:15:03.465: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  5 14:15:05.461: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:15:05.466: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  5 14:15:07.461: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:15:07.466: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  5 14:15:09.461: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:15:09.466: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  5 14:15:11.462: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:15:11.468: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  5 14:15:13.461: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:15:13.465: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  5 14:15:15.461: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:15:15.466: INFO: Pod pod-with-prestop-exec-hook still exists
Aug  5 14:15:17.461: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug  5 14:15:17.466: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:15:17.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4091" for this suite.
Aug  5 14:15:39.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:15:39.559: INFO: namespace container-lifecycle-hook-4091 deletion completed in 22.083714829s

• [SLOW TEST:54.226 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
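
Note: a preStop exec hook runs inside the container after the delete is issued but before the kubelet sends SIGTERM, which is why the pod lingers above until the hook finishes. A minimal sketch of the wiring; the hook command here just drops a marker file, whereas the test's hook calls back to a recording server:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-exec-hook
  spec:
    containers:
    - name: main
      image: busybox:1.29   # image choice is an assumption
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "echo prestop > /tmp/prestop; sleep 5"]
  EOF
  kubectl delete pod pod-with-prestop-exec-hook   # runs the hook, then terminates
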
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:15:39.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-773f062f-5152-4016-b746-1d065e736c01
STEP: Creating secret with name s-test-opt-upd-194ec298-f31a-4fc7-8e55-a4bd5b9f518b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-773f062f-5152-4016-b746-1d065e736c01
STEP: Updating secret s-test-opt-upd-194ec298-f31a-4fc7-8e55-a4bd5b9f518b
STEP: Creating secret with name s-test-opt-create-ec070c80-b088-4260-9d31-c576ea42ab54
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:15:49.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2981" for this suite.
Aug  5 14:16:11.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:16:11.975: INFO: namespace projected-2981 deletion completed in 22.093198888s

• [SLOW TEST:32.415 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
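
Note: the pod mounts a projected volume whose secret sources are marked optional, so the pod starts even when a source is missing, and the kubelet folds later secret deletes, updates, and creates into the mounted files. A sketch of that projection; names and keys are illustrative:

  kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-watch
  spec:
    containers:
    - name: main
      image: busybox:1.29   # image choice is an assumption
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: projected
        mountPath: /etc/projected
    volumes:
    - name: projected
      projected:
        sources:
        - secret:
            name: s-test-opt-upd
            optional: true
        - secret:
            name: s-test-opt-create
            optional: true   # pod starts even though this secret does not exist yet
  EOF
  # Create the missing secret; the kubelet syncs it into the volume shortly after.
  kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1
  kubectl exec projected-secret-watch -- ls /etc/projected
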
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:16:11.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0805 14:16:42.587228       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug  5 14:16:42.587: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:16:42.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9368" for this suite.
Aug  5 14:16:48.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:16:48.888: INFO: namespace gc-9368 deletion completed in 6.298064545s

• [SLOW TEST:36.912 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
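
Note: deleting with PropagationPolicy=Orphan removes the Deployment object but clears the ownerReference on its ReplicaSet instead of deleting it, which is what the 30-second watch above confirms. From kubectl the same delete looks like this (the deployment name is a placeholder and the flag spelling depends on client version):

  kubectl delete deployment the-deployment --cascade=orphan   # kubectl 1.20+
  # kubectl delete deployment the-deployment --cascade=false  # older clients, as in this suite's era
  kubectl get rs    # the orphaned ReplicaSet, and its pods, are still present
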
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:16:48.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Aug  5 14:16:48.974: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:16:49.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3122" for this suite.
Aug  5 14:16:55.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:16:55.141: INFO: namespace kubectl-3122 deletion completed in 6.08236441s

• [SLOW TEST:6.253 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
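
Note: with --port=0 (the -p 0 above) the proxy binds an ephemeral local port and prints the address it chose, so a caller has to parse stdout before issuing requests. A sketch:

  # Prints something like "Starting to serve on 127.0.0.1:40363".
  kubectl proxy --port=0 > /tmp/proxy.out &
  sleep 1
  PORT=$(sed -n 's/.*127.0.0.1:\([0-9]*\).*/\1/p' /tmp/proxy.out)
  curl "http://127.0.0.1:${PORT}/api/"   # the apiserver's version info, via the proxy
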
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:16:55.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 14:16:55.234: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug  5 14:16:55.291: INFO: Number of nodes with available pods: 0
Aug  5 14:16:55.291: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug  5 14:16:55.315: INFO: Number of nodes with available pods: 0
Aug  5 14:16:55.315: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:16:56.320: INFO: Number of nodes with available pods: 0
Aug  5 14:16:56.320: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:16:57.320: INFO: Number of nodes with available pods: 0
Aug  5 14:16:57.320: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:16:58.320: INFO: Number of nodes with available pods: 0
Aug  5 14:16:58.320: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:16:59.320: INFO: Number of nodes with available pods: 1
Aug  5 14:16:59.320: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug  5 14:16:59.351: INFO: Number of nodes with available pods: 1
Aug  5 14:16:59.351: INFO: Number of running nodes: 0, number of available pods: 1
Aug  5 14:17:00.356: INFO: Number of nodes with available pods: 0
Aug  5 14:17:00.356: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug  5 14:17:00.368: INFO: Number of nodes with available pods: 0
Aug  5 14:17:00.368: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:17:01.372: INFO: Number of nodes with available pods: 0
Aug  5 14:17:01.372: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:17:02.372: INFO: Number of nodes with available pods: 0
Aug  5 14:17:02.372: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:17:03.372: INFO: Number of nodes with available pods: 0
Aug  5 14:17:03.372: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:17:04.373: INFO: Number of nodes with available pods: 0
Aug  5 14:17:04.373: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:17:05.373: INFO: Number of nodes with available pods: 0
Aug  5 14:17:05.373: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:17:06.373: INFO: Number of nodes with available pods: 0
Aug  5 14:17:06.373: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:17:07.372: INFO: Number of nodes with available pods: 0
Aug  5 14:17:07.372: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:17:08.373: INFO: Number of nodes with available pods: 1
Aug  5 14:17:08.373: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3577, will wait for the garbage collector to delete the pods
Aug  5 14:17:08.439: INFO: Deleting DaemonSet.extensions daemon-set took: 6.605482ms
Aug  5 14:17:08.539: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.245097ms
Aug  5 14:17:11.846: INFO: Number of nodes with available pods: 0
Aug  5 14:17:11.846: INFO: Number of running nodes: 0, number of available pods: 0
Aug  5 14:17:11.848: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3577/daemonsets","resourceVersion":"3105856"},"items":null}

Aug  5 14:17:11.849: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3577/pods","resourceVersion":"3105856"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:17:11.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3577" for this suite.
Aug  5 14:17:17.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:17:17.993: INFO: namespace daemonsets-3577 deletion completed in 6.087851782s

• [SLOW TEST:22.851 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
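
Note: this DaemonSet carries a nodeSelector, so its pods exist only on nodes holding the matching label; relabeling a node evicts the daemon pod, and updating the selector brings it back. A sketch of that label dance; the node name mirrors the log, but the label key/value and selector patch are illustrative:

  kubectl label node iruya-worker color=blue                 # daemon pod appears on iruya-worker
  kubectl label node iruya-worker color=green --overwrite    # daemon pod is unscheduled
  # Point the DaemonSet at the new label; it reschedules onto the node.
  kubectl patch daemonset daemon-set --type merge \
    -p '{"spec":{"template":{"spec":{"nodeSelector":{"color":"green"}}}}}'
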
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:17:17.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-17453349-cefa-49f5-b5f4-1ce027d22d16 in namespace container-probe-8817
Aug  5 14:17:22.096: INFO: Started pod liveness-17453349-cefa-49f5-b5f4-1ce027d22d16 in namespace container-probe-8817
STEP: checking the pod's current state and verifying that restartCount is present
Aug  5 14:17:22.098: INFO: Initial restart count of pod liveness-17453349-cefa-49f5-b5f4-1ce027d22d16 is 0
Aug  5 14:17:34.123: INFO: Restart count of pod container-probe-8817/liveness-17453349-cefa-49f5-b5f4-1ce027d22d16 is now 1 (12.024691362s elapsed)
Aug  5 14:17:54.164: INFO: Restart count of pod container-probe-8817/liveness-17453349-cefa-49f5-b5f4-1ce027d22d16 is now 2 (32.065736736s elapsed)
Aug  5 14:18:14.205: INFO: Restart count of pod container-probe-8817/liveness-17453349-cefa-49f5-b5f4-1ce027d22d16 is now 3 (52.106893165s elapsed)
Aug  5 14:18:34.296: INFO: Restart count of pod container-probe-8817/liveness-17453349-cefa-49f5-b5f4-1ce027d22d16 is now 4 (1m12.197659003s elapsed)
Aug  5 14:19:44.573: INFO: Restart count of pod container-probe-8817/liveness-17453349-cefa-49f5-b5f4-1ce027d22d16 is now 5 (2m22.474808134s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:19:44.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8817" for this suite.
Aug  5 14:19:50.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:19:50.711: INFO: namespace container-probe-8817 deletion completed in 6.088827088s

• [SLOW TEST:152.717 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
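
Note: each liveness failure makes the kubelet kill and restart the container on an increasing back-off, which is why the intervals between restarts above stretch from ~12s to over a minute while restartCount only ever grows. A sketch of a pod that fails its probe on purpose, plus a way to watch the counter; names are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-demo
  spec:
    containers:
    - name: main
      image: busybox:1.29   # image choice is an assumption
      # Healthy for 10s, then the probed file disappears and the probe fails.
      command: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm /tmp/healthy; sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/healthy"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  kubectl get pod liveness-demo -w \
    -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount
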
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:19:50.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug  5 14:19:50.785: INFO: Waiting up to 5m0s for pod "downward-api-e24ef1f0-48aa-4beb-ae52-cfedafe11cc0" in namespace "downward-api-4411" to be "success or failure"
Aug  5 14:19:50.869: INFO: Pod "downward-api-e24ef1f0-48aa-4beb-ae52-cfedafe11cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 83.331113ms
Aug  5 14:19:52.873: INFO: Pod "downward-api-e24ef1f0-48aa-4beb-ae52-cfedafe11cc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087579202s
Aug  5 14:19:54.877: INFO: Pod "downward-api-e24ef1f0-48aa-4beb-ae52-cfedafe11cc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.091525743s
STEP: Saw pod success
Aug  5 14:19:54.877: INFO: Pod "downward-api-e24ef1f0-48aa-4beb-ae52-cfedafe11cc0" satisfied condition "success or failure"
Aug  5 14:19:54.880: INFO: Trying to get logs from node iruya-worker2 pod downward-api-e24ef1f0-48aa-4beb-ae52-cfedafe11cc0 container dapi-container: 
STEP: delete the pod
Aug  5 14:19:54.918: INFO: Waiting for pod downward-api-e24ef1f0-48aa-4beb-ae52-cfedafe11cc0 to disappear
Aug  5 14:19:54.928: INFO: Pod downward-api-e24ef1f0-48aa-4beb-ae52-cfedafe11cc0 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:19:54.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4411" for this suite.
Aug  5 14:20:00.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:20:01.017: INFO: namespace downward-api-4411 deletion completed in 6.085416302s

• [SLOW TEST:10.305 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
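
Note: resourceFieldRef is the downward-API source that surfaces a container's own requests and limits as environment variables, which is what the dapi-container above prints. A sketch; pod, container, and variable names are illustrative:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-resources
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox:1.29   # image choice is an assumption
      command: ["sh", "-c", "env | grep -E 'CPU_|MEMORY_'"]
      resources:
        requests:
          cpu: 250m
          memory: 32Mi
        limits:
          cpu: 500m
          memory: 64Mi
      env:
      - name: CPU_LIMIT
        valueFrom:
          resourceFieldRef:
            resource: limits.cpu       # default divisor 1: 500m rounds up to "1"
      - name: MEMORY_REQUEST
        valueFrom:
          resourceFieldRef:
            resource: requests.memory  # reported in bytes
  EOF
  kubectl logs downward-resources
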
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:20:01.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 14:20:01.104: INFO: Creating deployment "nginx-deployment"
Aug  5 14:20:01.109: INFO: Waiting for observed generation 1
Aug  5 14:20:03.120: INFO: Waiting for all required pods to come up
Aug  5 14:20:03.123: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug  5 14:20:13.138: INFO: Waiting for deployment "nginx-deployment" to complete
Aug  5 14:20:13.143: INFO: Updating deployment "nginx-deployment" with a non-existent image
Aug  5 14:20:13.149: INFO: Updating deployment nginx-deployment
Aug  5 14:20:13.149: INFO: Waiting for observed generation 2
Aug  5 14:20:15.184: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug  5 14:20:15.187: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug  5 14:20:15.189: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug  5 14:20:15.225: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug  5 14:20:15.225: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug  5 14:20:15.227: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug  5 14:20:15.231: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
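For context on the replica counts above: the deployment runs 10 replicas with maxUnavailable=2 and maxSurge=3 (see the spec dump below). Availability may therefore drop to 10 - 2 = 8, which is why the first rollout's ReplicaSet is held at .spec.replicas = 8, and total pods are capped at 10 + 3 = 13, so the second rollout's ReplicaSet gets 13 - 8 = 5 replicas. None of those 5 ever become available, because the nginx:404 image cannot be pulled.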
Aug  5 14:20:15.231: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Aug  5 14:20:15.237: INFO: Updating deployment nginx-deployment
Aug  5 14:20:15.237: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Aug  5 14:20:15.264: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug  5 14:20:15.308: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
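The 20/13 split verified above is the proportional scaling under test: resizing the deployment from 10 to 30 with maxSurge=3 raises the allowed pod total from 13 to 33 (recorded in the deployment.kubernetes.io/max-replicas annotations in the dumps below), and each ReplicaSet grows by that same 33/13 ratio. A simplified sketch of the arithmetic, not the controller's actual code path:

package main

import (
	"fmt"
	"math"
)

// proportionalShare resizes each ReplicaSet by the ratio of the new allowed
// pod total (replicas + maxSurge) to the old one, rounding to the nearest
// integer. This reproduces the numbers in this run; the real controller does
// the equivalent bookkeeping through the max-replicas annotation.
func proportionalShare(rsReplicas []int, oldAllowed, newAllowed int) []int {
	out := make([]int, len(rsReplicas))
	for i, r := range rsReplicas {
		out[i] = int(math.Round(float64(r) * float64(newAllowed) / float64(oldAllowed)))
	}
	return out
}

func main() {
	// Old ReplicaSet had 8 pods, new one 5; the allowed total goes 13 -> 33.
	fmt.Println(proportionalShare([]int{8, 5}, 13, 33)) // [20 13], as in the log
}

round(8 * 33/13) = 20 and round(5 * 33/13) = 13, matching the .spec.replicas values the test verifies.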
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug  5 14:20:15.570: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4843,SelfLink:/apis/apps/v1/namespaces/deployment-4843/deployments/nginx-deployment,UID:b5ce8cb7-9815-4804-9093-a32533d6ecae,ResourceVersion:3106506,Generation:3,CreationTimestamp:2020-08-05 14:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-08-05 14:20:13 +0000 UTC 2020-08-05 14:20:01 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-08-05 14:20:15 +0000 UTC 2020-08-05 14:20:15 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Aug  5 14:20:15.641: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4843,SelfLink:/apis/apps/v1/namespaces/deployment-4843/replicasets/nginx-deployment-55fb7cb77f,UID:c4f33d86-09d2-4b19-8b4f-20374135a082,ResourceVersion:3106541,Generation:3,CreationTimestamp:2020-08-05 14:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b5ce8cb7-9815-4804-9093-a32533d6ecae 0xc002c89377 0xc002c89378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug  5 14:20:15.641: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Aug  5 14:20:15.641: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4843,SelfLink:/apis/apps/v1/namespaces/deployment-4843/replicasets/nginx-deployment-7b8c6f4498,UID:05ffa777-7d74-4db1-b25a-3374dfeddd43,ResourceVersion:3106539,Generation:3,CreationTimestamp:2020-08-05 14:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b5ce8cb7-9815-4804-9093-a32533d6ecae 0xc002c89467 0xc002c89468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Aug  5 14:20:15.711: INFO: Pod "nginx-deployment-55fb7cb77f-2qdfh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2qdfh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-2qdfh,UID:0f35f8d6-783d-432a-adcf-8636b8c3809f,ResourceVersion:3106540,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfc0b7 0xc002cfc0b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfc140} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfc160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.711: INFO: Pod "nginx-deployment-55fb7cb77f-6gldx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6gldx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-6gldx,UID:9410233e-89a3-4a80-a6fe-2cf78b668b1e,ResourceVersion:3106536,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfc1e7 0xc002cfc1e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfc270} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfc290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.711: INFO: Pod "nginx-deployment-55fb7cb77f-c7bxb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c7bxb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-c7bxb,UID:804e66d6-1ef1-4dcd-bf09-701a2784ecad,ResourceVersion:3106547,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfc317 0xc002cfc318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfc390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfc3b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-05 14:20:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.711: INFO: Pod "nginx-deployment-55fb7cb77f-cs7tz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cs7tz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-cs7tz,UID:f0b74653-72f6-4c58-8cc4-954140c916ce,ResourceVersion:3106477,Generation:0,CreationTimestamp:2020-08-05 14:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfc480 0xc002cfc481}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfc500} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfc520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-05 14:20:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.711: INFO: Pod "nginx-deployment-55fb7cb77f-fxg6c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fxg6c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-fxg6c,UID:bd346889-bc5c-40f7-b4c0-08c34cd405e9,ResourceVersion:3106520,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfc5f0 0xc002cfc5f1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfc670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfc690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.712: INFO: Pod "nginx-deployment-55fb7cb77f-jw6fc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jw6fc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-jw6fc,UID:53de5f5d-c4ab-4b62-acb2-56c506f7016f,ResourceVersion:3106527,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfc717 0xc002cfc718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfc790} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfc7b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.712: INFO: Pod "nginx-deployment-55fb7cb77f-k4vrh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k4vrh,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-k4vrh,UID:d00ac075-d8ea-4416-bf29-36b2c754db49,ResourceVersion:3106461,Generation:0,CreationTimestamp:2020-08-05 14:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfc837 0xc002cfc838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfc8b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfc8d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-05 14:20:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.712: INFO: Pod "nginx-deployment-55fb7cb77f-lncxz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lncxz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-lncxz,UID:32dc65fa-f014-4e57-a83f-aca599427f02,ResourceVersion:3106454,Generation:0,CreationTimestamp:2020-08-05 14:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfc9a0 0xc002cfc9a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfca20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfca40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-05 14:20:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.712: INFO: Pod "nginx-deployment-55fb7cb77f-mh526" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mh526,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-mh526,UID:40d2f573-fb94-475d-9fa2-c46ef4c03847,ResourceVersion:3106462,Generation:0,CreationTimestamp:2020-08-05 14:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfcb10 0xc002cfcb11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfcb90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfcbb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-05 14:20:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.712: INFO: Pod "nginx-deployment-55fb7cb77f-sb6vn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sb6vn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-sb6vn,UID:4bfa889e-f20d-480c-bf66-b7e17c2d5186,ResourceVersion:3106476,Generation:0,CreationTimestamp:2020-08-05 14:20:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfcc80 0xc002cfcc81}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfcd00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfcd20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-05 14:20:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.716: INFO: Pod "nginx-deployment-55fb7cb77f-sxrtb" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-sxrtb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-sxrtb,UID:89b8e7c3-89af-43a0-bf92-2f45031a7a3b,ResourceVersion:3106529,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfcdf0 0xc002cfcdf1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfce70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfce90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.716: INFO: Pod "nginx-deployment-55fb7cb77f-t4pqx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t4pqx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-t4pqx,UID:c54b6ff1-aa35-4fa2-9aa6-c87b5b8e331a,ResourceVersion:3106514,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfcf17 0xc002cfcf18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfcf90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfcfb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.717: INFO: Pod "nginx-deployment-55fb7cb77f-wf2qw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wf2qw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-55fb7cb77f-wf2qw,UID:b837aec2-1b8b-473e-aa38-3cbbbe4dc027,ResourceVersion:3106528,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f c4f33d86-09d2-4b19-8b4f-20374135a082 0xc002cfd037 0xc002cfd038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfd0b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfd0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.717: INFO: Pod "nginx-deployment-7b8c6f4498-2gjhz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2gjhz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-2gjhz,UID:ab1f933a-5d7f-43f1-8bc4-6c02600ca0a8,ResourceVersion:3106535,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc002cfd157 0xc002cfd158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfd1d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfd1f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.717: INFO: Pod "nginx-deployment-7b8c6f4498-2l9v9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2l9v9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-2l9v9,UID:1adaa73e-0b16-43ed-b750-e29bb6a8b29b,ResourceVersion:3106510,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc002cfd277 0xc002cfd278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfd2f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfd310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.717: INFO: Pod "nginx-deployment-7b8c6f4498-5fmqh" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5fmqh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-5fmqh,UID:81bdea0f-4d0c-40ab-b556-e6712b8ff8ad,ResourceVersion:3106532,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc002cfd397 0xc002cfd398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfd410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfd430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.718: INFO: Pod "nginx-deployment-7b8c6f4498-5wswc" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5wswc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-5wswc,UID:a2948468-c128-4fb6-813e-eb22c02e5cbf,ResourceVersion:3106418,Generation:0,CreationTimestamp:2020-08-05 14:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc002cfd4b7 0xc002cfd4b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfd530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfd550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.160,StartTime:2020-08-05 14:20:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-05 14:20:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://381cf792657f4a4a2b503a63979149d7c7751f7ce1e194f23e66f006a8bfa4e4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.718: INFO: Pod "nginx-deployment-7b8c6f4498-6fgzq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6fgzq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-6fgzq,UID:fb7e143e-e13a-4229-8114-90bcdae56001,ResourceVersion:3106500,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc002cfd627 0xc002cfd628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfd6a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfd6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.718: INFO: Pod "nginx-deployment-7b8c6f4498-9qfww" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9qfww,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-9qfww,UID:faaf52f8-2433-4478-9b86-159309d4e24d,ResourceVersion:3106376,Generation:0,CreationTimestamp:2020-08-05 14:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc002cfd757 0xc002cfd758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfd7d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfd7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.184,StartTime:2020-08-05 14:20:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-05 14:20:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://930b3f8902299cec97f1d1943577059fefc00c1aa38add07fa8f39bdf24a5a90}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.718: INFO: Pod "nginx-deployment-7b8c6f4498-fxsl6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fxsl6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-fxsl6,UID:db13f83a-96c0-44f9-b966-477328f932a4,ResourceVersion:3106531,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc002cfd8c7 0xc002cfd8c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfd940} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfd960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.718: INFO: Pod "nginx-deployment-7b8c6f4498-g44w6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-g44w6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-g44w6,UID:afd0f839-30f7-4b75-8cea-b1f6958634c4,ResourceVersion:3106403,Generation:0,CreationTimestamp:2020-08-05 14:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc002cfd9e7 0xc002cfd9e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfda60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfda80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.185,StartTime:2020-08-05 14:20:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-05 14:20:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://34cd5616e1d2acbd086fca1eeb9342bb24c5d79e2acbbafeca737ff543b84bd1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.719: INFO: Pod "nginx-deployment-7b8c6f4498-ggn4g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ggn4g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-ggn4g,UID:3ad65709-0327-4933-9db1-9e014ecd3865,ResourceVersion:3106522,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc002cfdb57 0xc002cfdb58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfdbd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfdbf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.719: INFO: Pod "nginx-deployment-7b8c6f4498-h96wf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-h96wf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-h96wf,UID:e583fb09-5f57-4030-8fae-56bd520518ad,ResourceVersion:3106410,Generation:0,CreationTimestamp:2020-08-05 14:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc002cfdc77 0xc002cfdc78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfdd00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfdd20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.158,StartTime:2020-08-05 14:20:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-05 14:20:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://67291cdbd9493b21a3e1850c6187b26dea1098e34dee2a089a367ff32b2bf5b6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.719: INFO: Pod "nginx-deployment-7b8c6f4498-hp67p" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hp67p,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-hp67p,UID:6bfc24b6-bb15-4f4b-8e64-8a00ca0db85e,ResourceVersion:3106414,Generation:0,CreationTimestamp:2020-08-05 14:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc002cfddf7 0xc002cfddf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfde70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002cfde90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.159,StartTime:2020-08-05 14:20:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-05 14:20:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://06bbbd258ee6e1a3aaaa3ab26cd58ade455645c547c76efbbd84ed9d6a385365}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.719: INFO: Pod "nginx-deployment-7b8c6f4498-lfj7t" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lfj7t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-lfj7t,UID:fe6692b7-68b3-4c46-a941-8bae1666d4a8,ResourceVersion:3106519,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc002cfdf67 0xc002cfdf68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002cfdfe0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031d2000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.719: INFO: Pod "nginx-deployment-7b8c6f4498-pgsxw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pgsxw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-pgsxw,UID:b0036e3b-891b-460b-866e-13f790593da0,ResourceVersion:3106534,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc0031d2087 0xc0031d2088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031d2100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031d2150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.720: INFO: Pod "nginx-deployment-7b8c6f4498-pwhqq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pwhqq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-pwhqq,UID:6f6b7154-422b-4a8c-908e-7290a9983122,ResourceVersion:3106530,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc0031d21d7 0xc0031d21d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031d2260} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031d2280}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.720: INFO: Pod "nginx-deployment-7b8c6f4498-q7kpf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q7kpf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-q7kpf,UID:d73211f3-b404-4053-a37b-dd86769264a4,ResourceVersion:3106397,Generation:0,CreationTimestamp:2020-08-05 14:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc0031d2317 0xc0031d2318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031d2390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031d23b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.186,StartTime:2020-08-05 14:20:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-05 14:20:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://220c68fd4ad4f8f1f0da6dc757c253d2c0ade014c9469a263034557e8c7323cf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.720: INFO: Pod "nginx-deployment-7b8c6f4498-r4fb7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r4fb7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-r4fb7,UID:210b2915-e14f-4fb8-a85a-e114cd7ec3a1,ResourceVersion:3106357,Generation:0,CreationTimestamp:2020-08-05 14:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc0031d2487 0xc0031d2488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031d2500} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031d2520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.1.157,StartTime:2020-08-05 14:20:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-05 14:20:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://63c32ca514d7329950ddaabc6178e1e9874e6a51b4353cd815f81e5efa992c70}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.720: INFO: Pod "nginx-deployment-7b8c6f4498-rw7l6" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rw7l6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-rw7l6,UID:d8d853e6-e177-4403-b5f2-4aab3203f78e,ResourceVersion:3106545,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc0031d25f7 0xc0031d25f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031d2670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031d2690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-05 14:20:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.720: INFO: Pod "nginx-deployment-7b8c6f4498-w94sj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w94sj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-w94sj,UID:36252350-ccea-41a0-bf0c-cf00da10f4ed,ResourceVersion:3106521,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc0031d2757 0xc0031d2758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031d27d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031d27f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.720: INFO: Pod "nginx-deployment-7b8c6f4498-wpjzg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wpjzg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-wpjzg,UID:4353b206-a78a-4a72-a72d-83841fd53299,ResourceVersion:3106533,Generation:0,CreationTimestamp:2020-08-05 14:20:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc0031d2877 0xc0031d2878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031d2900} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031d2920}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:15 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2020-08-05 14:20:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug  5 14:20:15.721: INFO: Pod "nginx-deployment-7b8c6f4498-znqr5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-znqr5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4843,SelfLink:/api/v1/namespaces/deployment-4843/pods/nginx-deployment-7b8c6f4498-znqr5,UID:090ef342-b3a9-4a4e-a773-95637fa49c0f,ResourceVersion:3106399,Generation:0,CreationTimestamp:2020-08-05 14:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 05ffa777-7d74-4db1-b25a-3374dfeddd43 0xc0031d29e7 0xc0031d29e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-66tjg {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-66tjg,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-66tjg true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0031d2a60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0031d2a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:20:01 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.7,PodIP:10.244.2.188,StartTime:2020-08-05 14:20:01 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-05 14:20:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7bf5e6a186534dd1a64963ee8a074f66750d788fff3386e3ab52619f6d2728b0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:20:15.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4843" for this suite.
Aug  5 14:20:42.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:20:42.129: INFO: namespace deployment-4843 deletion completed in 26.366033183s

• [SLOW TEST:41.112 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
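
The pod dump above is proportional scaling made visible: pods created at 14:20:01 are already available while pods created at 14:20:15 are still Pending, because the Deployment was scaled up while a rollout was in flight and the controller distributes the extra replicas across its ReplicaSets in proportion to their current sizes. A minimal client-go sketch that triggers the same behavior, assuming the v1.15-era (pre-context) client-go signatures this suite runs against; the replica count is illustrative:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Scale the Deployment mid-rollout; the deployment controller splits
	// the new replicas between old and new ReplicaSets proportionally to
	// their current sizes rather than sending them all to one side.
	deployments := client.AppsV1().Deployments("deployment-4843")
	d, err := deployments.Get("nginx-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	replicas := int32(30)
	d.Spec.Replicas = &replicas
	if _, err := deployments.Update(d); err != nil {
		panic(err)
	}
	fmt.Println("scaled nginx-deployment to", replicas)
}
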
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:20:42.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug  5 14:20:42.196: INFO: Waiting up to 5m0s for pod "pod-e924de15-b33c-4271-b949-8ce92d7f3d19" in namespace "emptydir-7154" to be "success or failure"
Aug  5 14:20:42.200: INFO: Pod "pod-e924de15-b33c-4271-b949-8ce92d7f3d19": Phase="Pending", Reason="", readiness=false. Elapsed: 3.393694ms
Aug  5 14:20:44.204: INFO: Pod "pod-e924de15-b33c-4271-b949-8ce92d7f3d19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007804551s
Aug  5 14:20:46.209: INFO: Pod "pod-e924de15-b33c-4271-b949-8ce92d7f3d19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012457136s
STEP: Saw pod success
Aug  5 14:20:46.209: INFO: Pod "pod-e924de15-b33c-4271-b949-8ce92d7f3d19" satisfied condition "success or failure"
Aug  5 14:20:46.212: INFO: Trying to get logs from node iruya-worker2 pod pod-e924de15-b33c-4271-b949-8ce92d7f3d19 container test-container: 
STEP: delete the pod
Aug  5 14:20:46.232: INFO: Waiting for pod pod-e924de15-b33c-4271-b949-8ce92d7f3d19 to disappear
Aug  5 14:20:46.242: INFO: Pod pod-e924de15-b33c-4271-b949-8ce92d7f3d19 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:20:46.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7154" for this suite.
Aug  5 14:20:52.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:20:52.329: INFO: namespace emptydir-7154 deletion completed in 6.083319657s

• [SLOW TEST:10.200 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
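
For reference, the shape of pod this test creates: a single container mounting an emptyDir volume whose medium is Memory, i.e. tmpfs. A sketch using the core/v1 types; the image, command, and paths are illustrative stand-ins for what the real test image does (create a file as root and assert mode 0666):

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod returns a pod that writes a 0666 file into a
// memory-backed emptyDir and prints its mode.
func tmpfsEmptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
}

The Succeeded phase is what the "success or failure" wait above keys on: RestartPolicy Never plus a command that exits 0.
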
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:20:52.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-eab4aa00-19a1-435e-9e1b-9a7f63f850e9
STEP: Creating a pod to test consume configMaps
Aug  5 14:20:52.395: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b1c27314-b69c-423e-b125-3e132b60319f" in namespace "projected-1996" to be "success or failure"
Aug  5 14:20:52.446: INFO: Pod "pod-projected-configmaps-b1c27314-b69c-423e-b125-3e132b60319f": Phase="Pending", Reason="", readiness=false. Elapsed: 50.822993ms
Aug  5 14:20:54.450: INFO: Pod "pod-projected-configmaps-b1c27314-b69c-423e-b125-3e132b60319f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055217024s
Aug  5 14:20:56.455: INFO: Pod "pod-projected-configmaps-b1c27314-b69c-423e-b125-3e132b60319f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060792714s
STEP: Saw pod success
Aug  5 14:20:56.456: INFO: Pod "pod-projected-configmaps-b1c27314-b69c-423e-b125-3e132b60319f" satisfied condition "success or failure"
Aug  5 14:20:56.459: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-b1c27314-b69c-423e-b125-3e132b60319f container projected-configmap-volume-test: 
STEP: delete the pod
Aug  5 14:20:56.510: INFO: Waiting for pod pod-projected-configmaps-b1c27314-b69c-423e-b125-3e132b60319f to disappear
Aug  5 14:20:56.512: INFO: Pod pod-projected-configmaps-b1c27314-b69c-423e-b125-3e132b60319f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:20:56.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1996" for this suite.
Aug  5 14:21:02.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:21:02.673: INFO: namespace projected-1996 deletion completed in 6.15773386s

• [SLOW TEST:10.344 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
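
Unlike a plain configMap volume, the pod here consumes the ConfigMap through a projected volume, which can merge several sources (configMaps, secrets, downward API) under one mount point. A sketch of that spec; the ConfigMap name, key, and mount path are placeholders rather than the generated names in the log:

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod mounts one ConfigMap key at /etc/projected/data-1
// via a projected volume and prints it.
func projectedConfigMapPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-volume",
					MountPath: "/etc/projected",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
								Items:                []corev1.KeyToPath{{Key: "data-1", Path: "data-1"}},
							},
						}},
					},
				},
			}},
		},
	}
}
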
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:21:02.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug  5 14:21:02.820: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug  5 14:21:02.827: INFO: Waiting for terminating namespaces to be deleted...
Aug  5 14:21:02.829: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug  5 14:21:02.834: INFO: kindnet-k7tjm from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container status recorded)
Aug  5 14:21:02.834: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug  5 14:21:02.834: INFO: kube-proxy-jzrnl from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container status recorded)
Aug  5 14:21:02.834: INFO: 	Container kube-proxy ready: true, restart count 0
Aug  5 14:21:02.834: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug  5 14:21:02.845: INFO: kindnet-8kg9z from kube-system started at 2020-07-19 21:16:09 +0000 UTC (1 container status recorded)
Aug  5 14:21:02.845: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug  5 14:21:02.845: INFO: kube-proxy-9ktgx from kube-system started at 2020-07-19 21:16:10 +0000 UTC (1 container status recorded)
Aug  5 14:21:02.845: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-138c2e3f-10c1-426d-a86d-d432da522aed 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-138c2e3f-10c1-426d-a86d-d432da522aed off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-138c2e3f-10c1-426d-a86d-d432da522aed
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:21:11.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4363" for this suite.
Aug  5 14:21:21.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:21:21.096: INFO: namespace sched-pred-4363 deletion completed in 10.090044224s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:18.423 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
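The predicate exercised above amounts to: label a node, then require that label in the pod spec. A sketch with a hypothetical label key (the test generates a random kubernetes.io/e2e-* key):

# Label the node the unlabeled probe pod landed on, then pin a pod to it.
kubectl label node iruya-worker2 example.com/e2e-demo=42

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nodeselector-demo
spec:
  nodeSelector:
    example.com/e2e-demo: "42"
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

kubectl wait --for=condition=PodScheduled pod/nodeselector-demo --timeout=60s
kubectl get pod nodeselector-demo -o wide                 # should report NODE iruya-worker2

# Clean up, mirroring the test's label removal.
kubectl delete pod nodeselector-demo
kubectl label node iruya-worker2 example.com/e2e-demo-    # trailing '-' removes the label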
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:21:21.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug  5 14:21:21.153: INFO: Waiting up to 5m0s for pod "pod-8a257710-1f15-42f3-be6f-69183dfc8453" in namespace "emptydir-8169" to be "success or failure"
Aug  5 14:21:21.165: INFO: Pod "pod-8a257710-1f15-42f3-be6f-69183dfc8453": Phase="Pending", Reason="", readiness=false. Elapsed: 11.305472ms
Aug  5 14:21:23.169: INFO: Pod "pod-8a257710-1f15-42f3-be6f-69183dfc8453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01591817s
Aug  5 14:21:25.174: INFO: Pod "pod-8a257710-1f15-42f3-be6f-69183dfc8453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020489361s
STEP: Saw pod success
Aug  5 14:21:25.174: INFO: Pod "pod-8a257710-1f15-42f3-be6f-69183dfc8453" satisfied condition "success or failure"
Aug  5 14:21:25.177: INFO: Trying to get logs from node iruya-worker pod pod-8a257710-1f15-42f3-be6f-69183dfc8453 container test-container: 
STEP: delete the pod
Aug  5 14:21:25.198: INFO: Waiting for pod pod-8a257710-1f15-42f3-be6f-69183dfc8453 to disappear
Aug  5 14:21:25.201: INFO: Pod pod-8a257710-1f15-42f3-be6f-69183dfc8453 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:21:25.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8169" for this suite.
Aug  5 14:21:31.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:21:31.289: INFO: namespace emptydir-8169 deletion completed in 6.083660563s

• [SLOW TEST:10.193 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
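The (non-root,0644,tmpfs) variant above boils down to three spec knobs: a non-root runAsUser, an emptyDir with medium: Memory (tmpfs), and a 0644 file-mode check inside the mount. A sketch of the idea, not the framework's exact pod:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # the non-root half of the test variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo ok > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f && mount | grep /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # back the volume with tmpfs instead of node disk
EOF

kubectl logs -f emptydir-tmpfs-demo   # expect -rw-r--r-- and a tmpfs mount entry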
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:21:31.290: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8188
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug  5 14:21:31.363: INFO: Found 0 stateful pods, waiting for 3
Aug  5 14:21:41.367: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:21:41.367: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:21:41.367: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug  5 14:21:51.368: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:21:51.368: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:21:51.368: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:21:51.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8188 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug  5 14:21:51.606: INFO: stderr: "I0805 14:21:51.503256    2415 log.go:172] (0xc000b14420) (0xc0006d86e0) Create stream\nI0805 14:21:51.503308    2415 log.go:172] (0xc000b14420) (0xc0006d86e0) Stream added, broadcasting: 1\nI0805 14:21:51.507370    2415 log.go:172] (0xc000b14420) Reply frame received for 1\nI0805 14:21:51.507425    2415 log.go:172] (0xc000b14420) (0xc0006d8000) Create stream\nI0805 14:21:51.507436    2415 log.go:172] (0xc000b14420) (0xc0006d8000) Stream added, broadcasting: 3\nI0805 14:21:51.508314    2415 log.go:172] (0xc000b14420) Reply frame received for 3\nI0805 14:21:51.508376    2415 log.go:172] (0xc000b14420) (0xc0003f0140) Create stream\nI0805 14:21:51.508407    2415 log.go:172] (0xc000b14420) (0xc0003f0140) Stream added, broadcasting: 5\nI0805 14:21:51.509511    2415 log.go:172] (0xc000b14420) Reply frame received for 5\nI0805 14:21:51.572001    2415 log.go:172] (0xc000b14420) Data frame received for 5\nI0805 14:21:51.572047    2415 log.go:172] (0xc0003f0140) (5) Data frame handling\nI0805 14:21:51.572079    2415 log.go:172] (0xc0003f0140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0805 14:21:51.597300    2415 log.go:172] (0xc000b14420) Data frame received for 3\nI0805 14:21:51.597336    2415 log.go:172] (0xc0006d8000) (3) Data frame handling\nI0805 14:21:51.597371    2415 log.go:172] (0xc0006d8000) (3) Data frame sent\nI0805 14:21:51.597609    2415 log.go:172] (0xc000b14420) Data frame received for 3\nI0805 14:21:51.597623    2415 log.go:172] (0xc0006d8000) (3) Data frame handling\nI0805 14:21:51.597808    2415 log.go:172] (0xc000b14420) Data frame received for 5\nI0805 14:21:51.597824    2415 log.go:172] (0xc0003f0140) (5) Data frame handling\nI0805 14:21:51.599769    2415 log.go:172] (0xc000b14420) Data frame received for 1\nI0805 14:21:51.599787    2415 log.go:172] (0xc0006d86e0) (1) Data frame handling\nI0805 14:21:51.599800    2415 log.go:172] (0xc0006d86e0) (1) Data frame sent\nI0805 14:21:51.599814    2415 log.go:172] (0xc000b14420) (0xc0006d86e0) Stream removed, broadcasting: 1\nI0805 14:21:51.599830    2415 log.go:172] (0xc000b14420) Go away received\nI0805 14:21:51.600361    2415 log.go:172] (0xc000b14420) (0xc0006d86e0) Stream removed, broadcasting: 1\nI0805 14:21:51.600387    2415 log.go:172] (0xc000b14420) (0xc0006d8000) Stream removed, broadcasting: 3\nI0805 14:21:51.600399    2415 log.go:172] (0xc000b14420) (0xc0003f0140) Stream removed, broadcasting: 5\n"
Aug  5 14:21:51.606: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug  5 14:21:51.606: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
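The mv dance above works because the framework's nginx pods serve their readiness probe out of the web root: moving index.html aside makes the probe fail and the pod go unready; moving it back restores readiness. A comparable probe spec (an assumption, not the test's exact one) and the same trick by hand:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine
    readinessProbe:
      httpGet:
        path: /index.html          # 404s once the file is moved away
        port: 80
      periodSeconds: 1
EOF

kubectl wait --for=condition=Ready pod/probe-demo --timeout=60s
kubectl exec probe-demo -- mv /usr/share/nginx/html/index.html /tmp/
kubectl get pod probe-demo -w      # READY drops to 0/1 as the probe starts failing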

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug  5 14:22:01.652: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug  5 14:22:11.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8188 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug  5 14:22:11.937: INFO: stderr: "I0805 14:22:11.844639    2435 log.go:172] (0xc00091c420) (0xc000634820) Create stream\nI0805 14:22:11.844708    2435 log.go:172] (0xc00091c420) (0xc000634820) Stream added, broadcasting: 1\nI0805 14:22:11.848837    2435 log.go:172] (0xc00091c420) Reply frame received for 1\nI0805 14:22:11.848883    2435 log.go:172] (0xc00091c420) (0xc0002a4140) Create stream\nI0805 14:22:11.848896    2435 log.go:172] (0xc00091c420) (0xc0002a4140) Stream added, broadcasting: 3\nI0805 14:22:11.849608    2435 log.go:172] (0xc00091c420) Reply frame received for 3\nI0805 14:22:11.849640    2435 log.go:172] (0xc00091c420) (0xc000634000) Create stream\nI0805 14:22:11.849654    2435 log.go:172] (0xc00091c420) (0xc000634000) Stream added, broadcasting: 5\nI0805 14:22:11.850622    2435 log.go:172] (0xc00091c420) Reply frame received for 5\nI0805 14:22:11.927323    2435 log.go:172] (0xc00091c420) Data frame received for 3\nI0805 14:22:11.927354    2435 log.go:172] (0xc0002a4140) (3) Data frame handling\nI0805 14:22:11.927367    2435 log.go:172] (0xc0002a4140) (3) Data frame sent\nI0805 14:22:11.927376    2435 log.go:172] (0xc00091c420) Data frame received for 3\nI0805 14:22:11.927385    2435 log.go:172] (0xc0002a4140) (3) Data frame handling\nI0805 14:22:11.927460    2435 log.go:172] (0xc00091c420) Data frame received for 5\nI0805 14:22:11.927501    2435 log.go:172] (0xc000634000) (5) Data frame handling\nI0805 14:22:11.927526    2435 log.go:172] (0xc000634000) (5) Data frame sent\nI0805 14:22:11.927548    2435 log.go:172] (0xc00091c420) Data frame received for 5\nI0805 14:22:11.927561    2435 log.go:172] (0xc000634000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0805 14:22:11.929225    2435 log.go:172] (0xc00091c420) Data frame received for 1\nI0805 14:22:11.929261    2435 log.go:172] (0xc000634820) (1) Data frame handling\nI0805 14:22:11.929294    2435 log.go:172] (0xc000634820) (1) Data frame sent\nI0805 14:22:11.929325    2435 log.go:172] (0xc00091c420) (0xc000634820) Stream removed, broadcasting: 1\nI0805 14:22:11.929362    2435 log.go:172] (0xc00091c420) Go away received\nI0805 14:22:11.929762    2435 log.go:172] (0xc00091c420) (0xc000634820) Stream removed, broadcasting: 1\nI0805 14:22:11.929790    2435 log.go:172] (0xc00091c420) (0xc0002a4140) Stream removed, broadcasting: 3\nI0805 14:22:11.929805    2435 log.go:172] (0xc00091c420) (0xc000634000) Stream removed, broadcasting: 5\n"
Aug  5 14:22:11.937: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug  5 14:22:11.937: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug  5 14:22:31.957: INFO: Waiting for StatefulSet statefulset-8188/ss2 to complete update
Aug  5 14:22:31.957: INFO: Waiting for Pod statefulset-8188/ss2-0 (currently at revision ss2-6c5cd755cd) to reach update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Aug  5 14:22:41.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8188 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug  5 14:22:44.788: INFO: stderr: "I0805 14:22:44.657951    2455 log.go:172] (0xc000116f20) (0xc0006926e0) Create stream\nI0805 14:22:44.657988    2455 log.go:172] (0xc000116f20) (0xc0006926e0) Stream added, broadcasting: 1\nI0805 14:22:44.660307    2455 log.go:172] (0xc000116f20) Reply frame received for 1\nI0805 14:22:44.660362    2455 log.go:172] (0xc000116f20) (0xc000700000) Create stream\nI0805 14:22:44.660377    2455 log.go:172] (0xc000116f20) (0xc000700000) Stream added, broadcasting: 3\nI0805 14:22:44.661381    2455 log.go:172] (0xc000116f20) Reply frame received for 3\nI0805 14:22:44.661418    2455 log.go:172] (0xc000116f20) (0xc0007e8000) Create stream\nI0805 14:22:44.661430    2455 log.go:172] (0xc000116f20) (0xc0007e8000) Stream added, broadcasting: 5\nI0805 14:22:44.662287    2455 log.go:172] (0xc000116f20) Reply frame received for 5\nI0805 14:22:44.739712    2455 log.go:172] (0xc000116f20) Data frame received for 5\nI0805 14:22:44.739748    2455 log.go:172] (0xc0007e8000) (5) Data frame handling\nI0805 14:22:44.739764    2455 log.go:172] (0xc0007e8000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0805 14:22:44.776571    2455 log.go:172] (0xc000116f20) Data frame received for 3\nI0805 14:22:44.776608    2455 log.go:172] (0xc000700000) (3) Data frame handling\nI0805 14:22:44.776634    2455 log.go:172] (0xc000700000) (3) Data frame sent\nI0805 14:22:44.777222    2455 log.go:172] (0xc000116f20) Data frame received for 3\nI0805 14:22:44.777253    2455 log.go:172] (0xc000116f20) Data frame received for 5\nI0805 14:22:44.777287    2455 log.go:172] (0xc0007e8000) (5) Data frame handling\nI0805 14:22:44.777337    2455 log.go:172] (0xc000700000) (3) Data frame handling\nI0805 14:22:44.779062    2455 log.go:172] (0xc000116f20) Data frame received for 1\nI0805 14:22:44.779101    2455 log.go:172] (0xc0006926e0) (1) Data frame handling\nI0805 14:22:44.779137    2455 log.go:172] (0xc0006926e0) (1) Data frame sent\nI0805 14:22:44.779179    2455 log.go:172] (0xc000116f20) (0xc0006926e0) Stream removed, broadcasting: 1\nI0805 14:22:44.779231    2455 log.go:172] (0xc000116f20) Go away received\nI0805 14:22:44.779671    2455 log.go:172] (0xc000116f20) (0xc0006926e0) Stream removed, broadcasting: 1\nI0805 14:22:44.779697    2455 log.go:172] (0xc000116f20) (0xc000700000) Stream removed, broadcasting: 3\nI0805 14:22:44.779708    2455 log.go:172] (0xc000116f20) (0xc0007e8000) Stream removed, broadcasting: 5\n"
Aug  5 14:22:44.788: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug  5 14:22:44.788: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug  5 14:22:54.818: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug  5 14:23:04.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8188 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug  5 14:23:05.100: INFO: stderr: "I0805 14:23:05.001184    2489 log.go:172] (0xc000720a50) (0xc0009fe640) Create stream\nI0805 14:23:05.001235    2489 log.go:172] (0xc000720a50) (0xc0009fe640) Stream added, broadcasting: 1\nI0805 14:23:05.003431    2489 log.go:172] (0xc000720a50) Reply frame received for 1\nI0805 14:23:05.003480    2489 log.go:172] (0xc000720a50) (0xc0008d2000) Create stream\nI0805 14:23:05.003497    2489 log.go:172] (0xc000720a50) (0xc0008d2000) Stream added, broadcasting: 3\nI0805 14:23:05.004443    2489 log.go:172] (0xc000720a50) Reply frame received for 3\nI0805 14:23:05.004481    2489 log.go:172] (0xc000720a50) (0xc0009fe6e0) Create stream\nI0805 14:23:05.004500    2489 log.go:172] (0xc000720a50) (0xc0009fe6e0) Stream added, broadcasting: 5\nI0805 14:23:05.005535    2489 log.go:172] (0xc000720a50) Reply frame received for 5\nI0805 14:23:05.092260    2489 log.go:172] (0xc000720a50) Data frame received for 3\nI0805 14:23:05.092302    2489 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0805 14:23:05.092313    2489 log.go:172] (0xc0008d2000) (3) Data frame sent\nI0805 14:23:05.092319    2489 log.go:172] (0xc000720a50) Data frame received for 3\nI0805 14:23:05.092326    2489 log.go:172] (0xc0008d2000) (3) Data frame handling\nI0805 14:23:05.092396    2489 log.go:172] (0xc000720a50) Data frame received for 5\nI0805 14:23:05.092426    2489 log.go:172] (0xc0009fe6e0) (5) Data frame handling\nI0805 14:23:05.092448    2489 log.go:172] (0xc0009fe6e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0805 14:23:05.092469    2489 log.go:172] (0xc000720a50) Data frame received for 5\nI0805 14:23:05.092492    2489 log.go:172] (0xc0009fe6e0) (5) Data frame handling\nI0805 14:23:05.094167    2489 log.go:172] (0xc000720a50) Data frame received for 1\nI0805 14:23:05.094194    2489 log.go:172] (0xc0009fe640) (1) Data frame handling\nI0805 14:23:05.094212    2489 log.go:172] (0xc0009fe640) (1) Data frame sent\nI0805 14:23:05.094262    2489 log.go:172] (0xc000720a50) (0xc0009fe640) Stream removed, broadcasting: 1\nI0805 14:23:05.094649    2489 log.go:172] (0xc000720a50) (0xc0009fe640) Stream removed, broadcasting: 1\nI0805 14:23:05.094673    2489 log.go:172] (0xc000720a50) (0xc0008d2000) Stream removed, broadcasting: 3\nI0805 14:23:05.094689    2489 log.go:172] (0xc000720a50) (0xc0009fe6e0) Stream removed, broadcasting: 5\n"
Aug  5 14:23:05.101: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug  5 14:23:05.101: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug  5 14:23:15.121: INFO: Waiting for StatefulSet statefulset-8188/ss2 to complete update
Aug  5 14:23:15.121: INFO: Waiting for Pod statefulset-8188/ss2-0 (currently at revision ss2-7c9b54fd4c) to reach update revision ss2-6c5cd755cd
Aug  5 14:23:15.121: INFO: Waiting for Pod statefulset-8188/ss2-1 (currently at revision ss2-7c9b54fd4c) to reach update revision ss2-6c5cd755cd
Aug  5 14:23:15.121: INFO: Waiting for Pod statefulset-8188/ss2-2 (currently at revision ss2-7c9b54fd4c) to reach update revision ss2-6c5cd755cd
Aug  5 14:23:25.129: INFO: Waiting for StatefulSet statefulset-8188/ss2 to complete update
Aug  5 14:23:25.129: INFO: Waiting for Pod statefulset-8188/ss2-0 (currently at revision ss2-7c9b54fd4c) to reach update revision ss2-6c5cd755cd
Aug  5 14:23:35.129: INFO: Waiting for StatefulSet statefulset-8188/ss2 to complete update
Aug  5 14:23:35.129: INFO: Waiting for Pod statefulset-8188/ss2-0 (currently at revision ss2-7c9b54fd4c) to reach update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug  5 14:23:45.129: INFO: Deleting all statefulset in ns statefulset-8188
Aug  5 14:23:45.132: INFO: Scaling statefulset ss2 to 0
Aug  5 14:24:15.171: INFO: Waiting for statefulset status.replicas updated to 0
Aug  5 14:24:15.174: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:24:15.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8188" for this suite.
Aug  5 14:24:23.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:24:23.389: INFO: namespace statefulset-8188 deletion completed in 8.140513677s

• [SLOW TEST:172.099 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
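The update-then-rollback cycle the spec just walked through can be driven with plain kubectl against a StatefulSet like ss2; the container name nginx below is inferred from the pod conditions earlier in the log:

# Trigger a rolling update by changing the pod template image.
kubectl -n statefulset-8188 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n statefulset-8188 rollout status statefulset/ss2

# Each template change produces a ControllerRevision (ss2-6c5cd755cd, ss2-7c9b54fd4c above).
kubectl -n statefulset-8188 get controllerrevisions

# Roll back to the previous revision, as the test does programmatically.
kubectl -n statefulset-8188 rollout undo statefulset/ss2
kubectl -n statefulset-8188 rollout status statefulset/ss2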
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:24:23.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug  5 14:24:23.485: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9577,SelfLink:/api/v1/namespaces/watch-9577/configmaps/e2e-watch-test-watch-closed,UID:c43d75fe-c1a6-4dd5-b6fa-a06bc3c1435f,ResourceVersion:3107817,Generation:0,CreationTimestamp:2020-08-05 14:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug  5 14:24:23.485: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9577,SelfLink:/api/v1/namespaces/watch-9577/configmaps/e2e-watch-test-watch-closed,UID:c43d75fe-c1a6-4dd5-b6fa-a06bc3c1435f,ResourceVersion:3107818,Generation:0,CreationTimestamp:2020-08-05 14:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug  5 14:24:23.502: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9577,SelfLink:/api/v1/namespaces/watch-9577/configmaps/e2e-watch-test-watch-closed,UID:c43d75fe-c1a6-4dd5-b6fa-a06bc3c1435f,ResourceVersion:3107819,Generation:0,CreationTimestamp:2020-08-05 14:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug  5 14:24:23.503: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9577,SelfLink:/api/v1/namespaces/watch-9577/configmaps/e2e-watch-test-watch-closed,UID:c43d75fe-c1a6-4dd5-b6fa-a06bc3c1435f,ResourceVersion:3107820,Generation:0,CreationTimestamp:2020-08-05 14:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:24:23.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9577" for this suite.
Aug  5 14:24:29.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:24:29.591: INFO: namespace watch-9577 deletion completed in 6.083974882s

• [SLOW TEST:6.201 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
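The guarantee under test is watch resumption: a client that remembers the resourceVersion of the last event it saw can open a new watch from that version and receive everything that happened while it was away (here, the second MODIFIED and the DELETED). Against a live API server this can be sketched with kubectl proxy and curl; the namespace is a placeholder, and the resourceVersion must be one your cluster actually served:

# Expose the API server locally.
kubectl proxy --port=8001 &

# Resume watching configmaps from a previously observed resourceVersion
# (3107818 was the last event the first watch saw in the run above).
# Later events stream back as JSON watch events: MODIFIED, DELETED, ...
curl -N "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=3107818"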
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:24:29.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-5080b1eb-170c-488c-8da3-dac0348b0d0e
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-5080b1eb-170c-488c-8da3-dac0348b0d0e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:24:35.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9957" for this suite.
Aug  5 14:24:57.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:24:57.861: INFO: namespace configmap-9957 deletion completed in 22.128128786s

• [SLOW TEST:28.269 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
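What "waiting to observe update in volume" waits for is the kubelet's periodic sync of configMap volumes. The lag is observable by hand; names below are illustrative:

kubectl create configmap cm-upd --from-literal=key=before

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-upd-demo
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/cfg/key; echo; sleep 2; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cfg
  volumes:
  - name: cfg
    configMap:
      name: cm-upd
EOF

kubectl wait --for=condition=Ready pod/cm-upd-demo --timeout=60s

# Replace the ConfigMap in place; the mounted file follows after the kubelet sync period.
kubectl create configmap cm-upd --from-literal=key=after --dry-run -o yaml | kubectl replace -f -
kubectl logs -f cm-upd-demo        # 'before' lines eventually flip to 'after'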
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:24:57.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 14:24:57.951: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e52aa04b-d1af-49d7-851d-933db4aa36f7" in namespace "projected-2735" to be "success or failure"
Aug  5 14:24:57.961: INFO: Pod "downwardapi-volume-e52aa04b-d1af-49d7-851d-933db4aa36f7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.482323ms
Aug  5 14:24:59.966: INFO: Pod "downwardapi-volume-e52aa04b-d1af-49d7-851d-933db4aa36f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014827719s
Aug  5 14:25:01.970: INFO: Pod "downwardapi-volume-e52aa04b-d1af-49d7-851d-933db4aa36f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019093164s
STEP: Saw pod success
Aug  5 14:25:01.970: INFO: Pod "downwardapi-volume-e52aa04b-d1af-49d7-851d-933db4aa36f7" satisfied condition "success or failure"
Aug  5 14:25:01.974: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e52aa04b-d1af-49d7-851d-933db4aa36f7 container client-container: 
STEP: delete the pod
Aug  5 14:25:02.015: INFO: Waiting for pod downwardapi-volume-e52aa04b-d1af-49d7-851d-933db4aa36f7 to disappear
Aug  5 14:25:02.021: INFO: Pod downwardapi-volume-e52aa04b-d1af-49d7-851d-933db4aa36f7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:25:02.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2735" for this suite.
Aug  5 14:25:08.037: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:25:08.117: INFO: namespace projected-2735 deletion completed in 6.091933775s

• [SLOW TEST:10.256 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
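The defaulting rule checked above: when a container declares no memory limit, a downward-API resourceFieldRef for limits.memory resolves to the node's allocatable memory. A minimal projected downward-API pod showing the plumbing (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    # No resources.limits.memory is set, so mem_limit below defaults to
    # the node's allocatable memory.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF

kubectl logs -f downward-demo      # prints the node-allocatable memory, in bytes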
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:25:08.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3498
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-3498
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-3498
Aug  5 14:25:08.189: INFO: Found 0 stateful pods, waiting for 1
Aug  5 14:25:18.194: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug  5 14:25:18.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug  5 14:25:18.463: INFO: stderr: "I0805 14:25:18.345190    2509 log.go:172] (0xc000b0a000) (0xc000a521e0) Create stream\nI0805 14:25:18.345270    2509 log.go:172] (0xc000b0a000) (0xc000a521e0) Stream added, broadcasting: 1\nI0805 14:25:18.347707    2509 log.go:172] (0xc000b0a000) Reply frame received for 1\nI0805 14:25:18.347749    2509 log.go:172] (0xc000b0a000) (0xc000608460) Create stream\nI0805 14:25:18.347761    2509 log.go:172] (0xc000b0a000) (0xc000608460) Stream added, broadcasting: 3\nI0805 14:25:18.348695    2509 log.go:172] (0xc000b0a000) Reply frame received for 3\nI0805 14:25:18.348826    2509 log.go:172] (0xc000b0a000) (0xc000308000) Create stream\nI0805 14:25:18.348844    2509 log.go:172] (0xc000b0a000) (0xc000308000) Stream added, broadcasting: 5\nI0805 14:25:18.349822    2509 log.go:172] (0xc000b0a000) Reply frame received for 5\nI0805 14:25:18.423904    2509 log.go:172] (0xc000b0a000) Data frame received for 5\nI0805 14:25:18.423940    2509 log.go:172] (0xc000308000) (5) Data frame handling\nI0805 14:25:18.423959    2509 log.go:172] (0xc000308000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0805 14:25:18.453215    2509 log.go:172] (0xc000b0a000) Data frame received for 3\nI0805 14:25:18.453260    2509 log.go:172] (0xc000608460) (3) Data frame handling\nI0805 14:25:18.453306    2509 log.go:172] (0xc000608460) (3) Data frame sent\nI0805 14:25:18.453344    2509 log.go:172] (0xc000b0a000) Data frame received for 3\nI0805 14:25:18.453362    2509 log.go:172] (0xc000608460) (3) Data frame handling\nI0805 14:25:18.453661    2509 log.go:172] (0xc000b0a000) Data frame received for 5\nI0805 14:25:18.453688    2509 log.go:172] (0xc000308000) (5) Data frame handling\nI0805 14:25:18.455729    2509 log.go:172] (0xc000b0a000) Data frame received for 1\nI0805 14:25:18.455760    2509 log.go:172] (0xc000a521e0) (1) Data frame handling\nI0805 14:25:18.455779    2509 log.go:172] (0xc000a521e0) (1) Data frame sent\nI0805 14:25:18.455798    2509 log.go:172] (0xc000b0a000) (0xc000a521e0) Stream removed, broadcasting: 1\nI0805 14:25:18.455818    2509 log.go:172] (0xc000b0a000) Go away received\nI0805 14:25:18.456204    2509 log.go:172] (0xc000b0a000) (0xc000a521e0) Stream removed, broadcasting: 1\nI0805 14:25:18.456234    2509 log.go:172] (0xc000b0a000) (0xc000608460) Stream removed, broadcasting: 3\nI0805 14:25:18.456254    2509 log.go:172] (0xc000b0a000) (0xc000308000) Stream removed, broadcasting: 5\n"
Aug  5 14:25:18.463: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug  5 14:25:18.463: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug  5 14:25:18.468: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug  5 14:25:28.473: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug  5 14:25:28.473: INFO: Waiting for statefulset status.replicas updated to 0
Aug  5 14:25:28.499: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Aug  5 14:25:28.499: INFO: ss-0  iruya-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:08 +0000 UTC  }]
Aug  5 14:25:28.499: INFO: 
Aug  5 14:25:28.499: INFO: StatefulSet ss has not reached scale 3, at 1
Aug  5 14:25:29.511: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987662397s
Aug  5 14:25:30.632: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.9751068s
Aug  5 14:25:31.647: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.854824059s
Aug  5 14:25:32.652: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.839849848s
Aug  5 14:25:33.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.834714043s
Aug  5 14:25:34.664: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.82894616s
Aug  5 14:25:35.669: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.821927268s
Aug  5 14:25:36.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.816968436s
Aug  5 14:25:37.680: INFO: Verifying statefulset ss doesn't scale past 3 for another 811.219994ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-3498
Aug  5 14:25:38.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug  5 14:25:38.909: INFO: stderr: "I0805 14:25:38.806204    2530 log.go:172] (0xc00094c370) (0xc0009946e0) Create stream\nI0805 14:25:38.806265    2530 log.go:172] (0xc00094c370) (0xc0009946e0) Stream added, broadcasting: 1\nI0805 14:25:38.808602    2530 log.go:172] (0xc00094c370) Reply frame received for 1\nI0805 14:25:38.808639    2530 log.go:172] (0xc00094c370) (0xc000994780) Create stream\nI0805 14:25:38.808652    2530 log.go:172] (0xc00094c370) (0xc000994780) Stream added, broadcasting: 3\nI0805 14:25:38.809704    2530 log.go:172] (0xc00094c370) Reply frame received for 3\nI0805 14:25:38.809743    2530 log.go:172] (0xc00094c370) (0xc000638320) Create stream\nI0805 14:25:38.809764    2530 log.go:172] (0xc00094c370) (0xc000638320) Stream added, broadcasting: 5\nI0805 14:25:38.810724    2530 log.go:172] (0xc00094c370) Reply frame received for 5\nI0805 14:25:38.901485    2530 log.go:172] (0xc00094c370) Data frame received for 3\nI0805 14:25:38.901537    2530 log.go:172] (0xc000994780) (3) Data frame handling\nI0805 14:25:38.901549    2530 log.go:172] (0xc000994780) (3) Data frame sent\nI0805 14:25:38.901558    2530 log.go:172] (0xc00094c370) Data frame received for 3\nI0805 14:25:38.901566    2530 log.go:172] (0xc000994780) (3) Data frame handling\nI0805 14:25:38.901604    2530 log.go:172] (0xc00094c370) Data frame received for 5\nI0805 14:25:38.901626    2530 log.go:172] (0xc000638320) (5) Data frame handling\nI0805 14:25:38.901645    2530 log.go:172] (0xc000638320) (5) Data frame sent\nI0805 14:25:38.901660    2530 log.go:172] (0xc00094c370) Data frame received for 5\nI0805 14:25:38.901668    2530 log.go:172] (0xc000638320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0805 14:25:38.902874    2530 log.go:172] (0xc00094c370) Data frame received for 1\nI0805 14:25:38.902899    2530 log.go:172] (0xc0009946e0) (1) Data frame handling\nI0805 14:25:38.902914    2530 log.go:172] (0xc0009946e0) (1) Data frame sent\nI0805 14:25:38.902944    2530 log.go:172] (0xc00094c370) (0xc0009946e0) Stream removed, broadcasting: 1\nI0805 14:25:38.902965    2530 log.go:172] (0xc00094c370) Go away received\nI0805 14:25:38.903478    2530 log.go:172] (0xc00094c370) (0xc0009946e0) Stream removed, broadcasting: 1\nI0805 14:25:38.903510    2530 log.go:172] (0xc00094c370) (0xc000994780) Stream removed, broadcasting: 3\nI0805 14:25:38.903529    2530 log.go:172] (0xc00094c370) (0xc000638320) Stream removed, broadcasting: 5\n"
Aug  5 14:25:38.910: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug  5 14:25:38.910: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug  5 14:25:38.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug  5 14:25:39.128: INFO: stderr: "I0805 14:25:39.048397    2550 log.go:172] (0xc000932160) (0xc000538640) Create stream\nI0805 14:25:39.048464    2550 log.go:172] (0xc000932160) (0xc000538640) Stream added, broadcasting: 1\nI0805 14:25:39.051239    2550 log.go:172] (0xc000932160) Reply frame received for 1\nI0805 14:25:39.051291    2550 log.go:172] (0xc000932160) (0xc0006d21e0) Create stream\nI0805 14:25:39.051306    2550 log.go:172] (0xc000932160) (0xc0006d21e0) Stream added, broadcasting: 3\nI0805 14:25:39.052160    2550 log.go:172] (0xc000932160) Reply frame received for 3\nI0805 14:25:39.052191    2550 log.go:172] (0xc000932160) (0xc000722000) Create stream\nI0805 14:25:39.052203    2550 log.go:172] (0xc000932160) (0xc000722000) Stream added, broadcasting: 5\nI0805 14:25:39.053191    2550 log.go:172] (0xc000932160) Reply frame received for 5\nI0805 14:25:39.119903    2550 log.go:172] (0xc000932160) Data frame received for 5\nI0805 14:25:39.119937    2550 log.go:172] (0xc000722000) (5) Data frame handling\nI0805 14:25:39.119948    2550 log.go:172] (0xc000722000) (5) Data frame sent\nI0805 14:25:39.119957    2550 log.go:172] (0xc000932160) Data frame received for 5\nI0805 14:25:39.119965    2550 log.go:172] (0xc000722000) (5) Data frame handling\nI0805 14:25:39.119977    2550 log.go:172] (0xc000932160) Data frame received for 3\nI0805 14:25:39.119991    2550 log.go:172] (0xc0006d21e0) (3) Data frame handling\nI0805 14:25:39.120005    2550 log.go:172] (0xc0006d21e0) (3) Data frame sent\nI0805 14:25:39.120013    2550 log.go:172] (0xc000932160) Data frame received for 3\nI0805 14:25:39.120024    2550 log.go:172] (0xc0006d21e0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0805 14:25:39.121787    2550 log.go:172] (0xc000932160) Data frame received for 1\nI0805 14:25:39.121807    2550 log.go:172] (0xc000538640) (1) Data frame handling\nI0805 14:25:39.121817    2550 log.go:172] (0xc000538640) (1) Data frame sent\nI0805 14:25:39.121829    2550 log.go:172] (0xc000932160) (0xc000538640) Stream removed, broadcasting: 1\nI0805 14:25:39.121886    2550 log.go:172] (0xc000932160) Go away received\nI0805 14:25:39.122245    2550 log.go:172] (0xc000932160) (0xc000538640) Stream removed, broadcasting: 1\nI0805 14:25:39.122266    2550 log.go:172] (0xc000932160) (0xc0006d21e0) Stream removed, broadcasting: 3\nI0805 14:25:39.122276    2550 log.go:172] (0xc000932160) (0xc000722000) Stream removed, broadcasting: 5\n"
Aug  5 14:25:39.128: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug  5 14:25:39.128: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug  5 14:25:39.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug  5 14:25:39.398: INFO: stderr: "I0805 14:25:39.316541    2572 log.go:172] (0xc000a5c0b0) (0xc00069e960) Create stream\nI0805 14:25:39.316631    2572 log.go:172] (0xc000a5c0b0) (0xc00069e960) Stream added, broadcasting: 1\nI0805 14:25:39.319529    2572 log.go:172] (0xc000a5c0b0) Reply frame received for 1\nI0805 14:25:39.319586    2572 log.go:172] (0xc000a5c0b0) (0xc0006cc000) Create stream\nI0805 14:25:39.319604    2572 log.go:172] (0xc000a5c0b0) (0xc0006cc000) Stream added, broadcasting: 3\nI0805 14:25:39.320901    2572 log.go:172] (0xc000a5c0b0) Reply frame received for 3\nI0805 14:25:39.320968    2572 log.go:172] (0xc000a5c0b0) (0xc00069ea00) Create stream\nI0805 14:25:39.320986    2572 log.go:172] (0xc000a5c0b0) (0xc00069ea00) Stream added, broadcasting: 5\nI0805 14:25:39.322153    2572 log.go:172] (0xc000a5c0b0) Reply frame received for 5\nI0805 14:25:39.386313    2572 log.go:172] (0xc000a5c0b0) Data frame received for 3\nI0805 14:25:39.386363    2572 log.go:172] (0xc0006cc000) (3) Data frame handling\nI0805 14:25:39.386374    2572 log.go:172] (0xc0006cc000) (3) Data frame sent\nI0805 14:25:39.386380    2572 log.go:172] (0xc000a5c0b0) Data frame received for 3\nI0805 14:25:39.386384    2572 log.go:172] (0xc0006cc000) (3) Data frame handling\nI0805 14:25:39.386441    2572 log.go:172] (0xc000a5c0b0) Data frame received for 5\nI0805 14:25:39.386473    2572 log.go:172] (0xc00069ea00) (5) Data frame handling\nI0805 14:25:39.386522    2572 log.go:172] (0xc00069ea00) (5) Data frame sent\nI0805 14:25:39.386543    2572 log.go:172] (0xc000a5c0b0) Data frame received for 5\nI0805 14:25:39.386560    2572 log.go:172] (0xc00069ea00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0805 14:25:39.392717    2572 log.go:172] (0xc000a5c0b0) Data frame received for 1\nI0805 14:25:39.392836    2572 log.go:172] (0xc00069e960) (1) Data frame handling\nI0805 14:25:39.392864    2572 log.go:172] (0xc00069e960) (1) Data frame sent\nI0805 14:25:39.392880    2572 log.go:172] (0xc000a5c0b0) (0xc00069e960) Stream removed, broadcasting: 1\nI0805 14:25:39.392925    2572 log.go:172] (0xc000a5c0b0) Go away received\nI0805 14:25:39.393178    2572 log.go:172] (0xc000a5c0b0) (0xc00069e960) Stream removed, broadcasting: 1\nI0805 14:25:39.393192    2572 log.go:172] (0xc000a5c0b0) (0xc0006cc000) Stream removed, broadcasting: 3\nI0805 14:25:39.393199    2572 log.go:172] (0xc000a5c0b0) (0xc00069ea00) Stream removed, broadcasting: 5\n"
Aug  5 14:25:39.398: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug  5 14:25:39.398: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug  5 14:25:39.402: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Aug  5 14:25:49.407: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:25:49.407: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug  5 14:25:49.407: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
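The burst behaviour just demonstrated (scale-up proceeding while ss-0 was unready) is what podManagementPolicy: Parallel buys: the controller creates and deletes pods without waiting for lower ordinals to be Ready. A sketch of a StatefulSet with that policy, assumed to match what the framework creates for this spec:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None                  # headless service for stable pod DNS
  selector:
    app: ss-burst-demo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss-burst-demo
spec:
  serviceName: test
  podManagementPolicy: Parallel    # burst create/delete; the default is OrderedReady
  replicas: 3
  selector:
    matchLabels:
      app: ss-burst-demo
  template:
    metadata:
      labels:
        app: ss-burst-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF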
STEP: Scale down will not halt with unhealthy stateful pod
Aug  5 14:25:49.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug  5 14:25:49.643: INFO: stderr: "I0805 14:25:49.548350    2592 log.go:172] (0xc0009e8420) (0xc0005ca6e0) Create stream\nI0805 14:25:49.548409    2592 log.go:172] (0xc0009e8420) (0xc0005ca6e0) Stream added, broadcasting: 1\nI0805 14:25:49.551513    2592 log.go:172] (0xc0009e8420) Reply frame received for 1\nI0805 14:25:49.551541    2592 log.go:172] (0xc0009e8420) (0xc0005ca000) Create stream\nI0805 14:25:49.551549    2592 log.go:172] (0xc0009e8420) (0xc0005ca000) Stream added, broadcasting: 3\nI0805 14:25:49.552523    2592 log.go:172] (0xc0009e8420) Reply frame received for 3\nI0805 14:25:49.552564    2592 log.go:172] (0xc0009e8420) (0xc0005aa000) Create stream\nI0805 14:25:49.552574    2592 log.go:172] (0xc0009e8420) (0xc0005aa000) Stream added, broadcasting: 5\nI0805 14:25:49.553401    2592 log.go:172] (0xc0009e8420) Reply frame received for 5\nI0805 14:25:49.636037    2592 log.go:172] (0xc0009e8420) Data frame received for 5\nI0805 14:25:49.636072    2592 log.go:172] (0xc0005aa000) (5) Data frame handling\nI0805 14:25:49.636081    2592 log.go:172] (0xc0005aa000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0805 14:25:49.636095    2592 log.go:172] (0xc0009e8420) Data frame received for 3\nI0805 14:25:49.636102    2592 log.go:172] (0xc0005ca000) (3) Data frame handling\nI0805 14:25:49.636111    2592 log.go:172] (0xc0005ca000) (3) Data frame sent\nI0805 14:25:49.636119    2592 log.go:172] (0xc0009e8420) Data frame received for 3\nI0805 14:25:49.636123    2592 log.go:172] (0xc0005ca000) (3) Data frame handling\nI0805 14:25:49.636315    2592 log.go:172] (0xc0009e8420) Data frame received for 5\nI0805 14:25:49.636343    2592 log.go:172] (0xc0005aa000) (5) Data frame handling\nI0805 14:25:49.637657    2592 log.go:172] (0xc0009e8420) Data frame received for 1\nI0805 14:25:49.637681    2592 log.go:172] (0xc0005ca6e0) (1) Data frame handling\nI0805 14:25:49.637689    2592 log.go:172] (0xc0005ca6e0) (1) Data frame sent\nI0805 14:25:49.637699    2592 log.go:172] (0xc0009e8420) (0xc0005ca6e0) Stream removed, broadcasting: 1\nI0805 14:25:49.637767    2592 log.go:172] (0xc0009e8420) Go away received\nI0805 14:25:49.637931    2592 log.go:172] (0xc0009e8420) (0xc0005ca6e0) Stream removed, broadcasting: 1\nI0805 14:25:49.637944    2592 log.go:172] (0xc0009e8420) (0xc0005ca000) Stream removed, broadcasting: 3\nI0805 14:25:49.637950    2592 log.go:172] (0xc0009e8420) (0xc0005aa000) Stream removed, broadcasting: 5\n"
Aug  5 14:25:49.643: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug  5 14:25:49.643: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug  5 14:25:49.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug  5 14:25:49.883: INFO: stderr: "I0805 14:25:49.773037    2612 log.go:172] (0xc000932000) (0xc00085a0a0) Create stream\nI0805 14:25:49.773084    2612 log.go:172] (0xc000932000) (0xc00085a0a0) Stream added, broadcasting: 1\nI0805 14:25:49.775120    2612 log.go:172] (0xc000932000) Reply frame received for 1\nI0805 14:25:49.775151    2612 log.go:172] (0xc000932000) (0xc000852000) Create stream\nI0805 14:25:49.775161    2612 log.go:172] (0xc000932000) (0xc000852000) Stream added, broadcasting: 3\nI0805 14:25:49.776009    2612 log.go:172] (0xc000932000) Reply frame received for 3\nI0805 14:25:49.776064    2612 log.go:172] (0xc000932000) (0xc0005aa140) Create stream\nI0805 14:25:49.776088    2612 log.go:172] (0xc000932000) (0xc0005aa140) Stream added, broadcasting: 5\nI0805 14:25:49.777070    2612 log.go:172] (0xc000932000) Reply frame received for 5\nI0805 14:25:49.839937    2612 log.go:172] (0xc000932000) Data frame received for 5\nI0805 14:25:49.839965    2612 log.go:172] (0xc0005aa140) (5) Data frame handling\nI0805 14:25:49.839984    2612 log.go:172] (0xc0005aa140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0805 14:25:49.875804    2612 log.go:172] (0xc000932000) Data frame received for 3\nI0805 14:25:49.875847    2612 log.go:172] (0xc000852000) (3) Data frame handling\nI0805 14:25:49.875910    2612 log.go:172] (0xc000852000) (3) Data frame sent\nI0805 14:25:49.875927    2612 log.go:172] (0xc000932000) Data frame received for 3\nI0805 14:25:49.875936    2612 log.go:172] (0xc000852000) (3) Data frame handling\nI0805 14:25:49.875950    2612 log.go:172] (0xc000932000) Data frame received for 5\nI0805 14:25:49.875963    2612 log.go:172] (0xc0005aa140) (5) Data frame handling\nI0805 14:25:49.878015    2612 log.go:172] (0xc000932000) Data frame received for 1\nI0805 14:25:49.878040    2612 log.go:172] (0xc00085a0a0) (1) Data frame handling\nI0805 14:25:49.878052    2612 log.go:172] (0xc00085a0a0) (1) Data frame sent\nI0805 14:25:49.878066    2612 log.go:172] (0xc000932000) (0xc00085a0a0) Stream removed, broadcasting: 1\nI0805 14:25:49.878135    2612 log.go:172] (0xc000932000) Go away received\nI0805 14:25:49.878449    2612 log.go:172] (0xc000932000) (0xc00085a0a0) Stream removed, broadcasting: 1\nI0805 14:25:49.878479    2612 log.go:172] (0xc000932000) (0xc000852000) Stream removed, broadcasting: 3\nI0805 14:25:49.878501    2612 log.go:172] (0xc000932000) (0xc0005aa140) Stream removed, broadcasting: 5\n"
Aug  5 14:25:49.884: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug  5 14:25:49.884: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug  5 14:25:49.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug  5 14:25:50.113: INFO: stderr: "I0805 14:25:50.017965    2633 log.go:172] (0xc0005a6420) (0xc0006ca960) Create stream\nI0805 14:25:50.018052    2633 log.go:172] (0xc0005a6420) (0xc0006ca960) Stream added, broadcasting: 1\nI0805 14:25:50.026111    2633 log.go:172] (0xc0005a6420) Reply frame received for 1\nI0805 14:25:50.026153    2633 log.go:172] (0xc0005a6420) (0xc0006ca280) Create stream\nI0805 14:25:50.026169    2633 log.go:172] (0xc0005a6420) (0xc0006ca280) Stream added, broadcasting: 3\nI0805 14:25:50.026888    2633 log.go:172] (0xc0005a6420) Reply frame received for 3\nI0805 14:25:50.026911    2633 log.go:172] (0xc0005a6420) (0xc0006ca320) Create stream\nI0805 14:25:50.026917    2633 log.go:172] (0xc0005a6420) (0xc0006ca320) Stream added, broadcasting: 5\nI0805 14:25:50.027596    2633 log.go:172] (0xc0005a6420) Reply frame received for 5\nI0805 14:25:50.076658    2633 log.go:172] (0xc0005a6420) Data frame received for 5\nI0805 14:25:50.076690    2633 log.go:172] (0xc0006ca320) (5) Data frame handling\nI0805 14:25:50.076710    2633 log.go:172] (0xc0006ca320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0805 14:25:50.103693    2633 log.go:172] (0xc0005a6420) Data frame received for 3\nI0805 14:25:50.103739    2633 log.go:172] (0xc0006ca280) (3) Data frame handling\nI0805 14:25:50.103759    2633 log.go:172] (0xc0006ca280) (3) Data frame sent\nI0805 14:25:50.103787    2633 log.go:172] (0xc0005a6420) Data frame received for 3\nI0805 14:25:50.103826    2633 log.go:172] (0xc0006ca280) (3) Data frame handling\nI0805 14:25:50.104123    2633 log.go:172] (0xc0005a6420) Data frame received for 5\nI0805 14:25:50.104216    2633 log.go:172] (0xc0006ca320) (5) Data frame handling\nI0805 14:25:50.106202    2633 log.go:172] (0xc0005a6420) Data frame received for 1\nI0805 14:25:50.106229    2633 log.go:172] (0xc0006ca960) (1) Data frame handling\nI0805 14:25:50.106250    2633 log.go:172] (0xc0006ca960) (1) Data frame sent\nI0805 14:25:50.106273    2633 log.go:172] (0xc0005a6420) (0xc0006ca960) Stream removed, broadcasting: 1\nI0805 14:25:50.106289    2633 log.go:172] (0xc0005a6420) Go away received\nI0805 14:25:50.106735    2633 log.go:172] (0xc0005a6420) (0xc0006ca960) Stream removed, broadcasting: 1\nI0805 14:25:50.106762    2633 log.go:172] (0xc0005a6420) (0xc0006ca280) Stream removed, broadcasting: 3\nI0805 14:25:50.106774    2633 log.go:172] (0xc0005a6420) (0xc0006ca320) Stream removed, broadcasting: 5\n"
Aug  5 14:25:50.113: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug  5 14:25:50.113: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug  5 14:25:50.113: INFO: Waiting for statefulset status.replicas updated to 0
Aug  5 14:25:50.116: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Aug  5 14:26:00.125: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug  5 14:26:00.125: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug  5 14:26:00.125: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug  5 14:26:00.141: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug  5 14:26:00.141: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:08 +0000 UTC  }]
Aug  5 14:26:00.141: INFO: ss-1  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  }]
Aug  5 14:26:00.141: INFO: ss-2  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  }]
Aug  5 14:26:00.141: INFO: 
Aug  5 14:26:00.141: INFO: StatefulSet ss has not reached scale 0, at 3
Aug  5 14:26:01.192: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug  5 14:26:01.192: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:08 +0000 UTC  }]
Aug  5 14:26:01.192: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  }]
Aug  5 14:26:01.192: INFO: ss-2  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  }]
Aug  5 14:26:01.192: INFO: 
Aug  5 14:26:01.192: INFO: StatefulSet ss has not reached scale 0, at 3
Aug  5 14:26:02.197: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug  5 14:26:02.197: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:08 +0000 UTC  }]
Aug  5 14:26:02.197: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  }]
Aug  5 14:26:02.197: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  }]
Aug  5 14:26:02.197: INFO: 
Aug  5 14:26:02.197: INFO: StatefulSet ss has not reached scale 0, at 3
Aug  5 14:26:03.202: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug  5 14:26:03.202: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:08 +0000 UTC  }]
Aug  5 14:26:03.203: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  }]
Aug  5 14:26:03.203: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  }]
Aug  5 14:26:03.203: INFO: 
Aug  5 14:26:03.203: INFO: StatefulSet ss has not reached scale 0, at 3
Aug  5 14:26:04.208: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug  5 14:26:04.208: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:08 +0000 UTC  }]
Aug  5 14:26:04.208: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  }]
Aug  5 14:26:04.208: INFO: ss-2  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  }]
Aug  5 14:26:04.208: INFO: 
Aug  5 14:26:04.208: INFO: StatefulSet ss has not reached scale 0, at 3
Aug  5 14:26:05.221: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug  5 14:26:05.221: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  }]
Aug  5 14:26:05.221: INFO: 
Aug  5 14:26:05.221: INFO: StatefulSet ss has not reached scale 0, at 1
Aug  5 14:26:06.226: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug  5 14:26:06.226: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:50 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-05 14:25:28 +0000 UTC  }]
Aug  5 14:26:06.226: INFO: 
Aug  5 14:26:06.227: INFO: StatefulSet ss has not reached scale 0, at 1
Aug  5 14:26:07.231: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.905577582s
Aug  5 14:26:08.235: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.901260267s
Aug  5 14:26:09.239: INFO: Verifying statefulset ss doesn't scale past 0 for another 897.288073ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of its pods will run in namespace statefulset-3498
Aug  5 14:26:10.244: INFO: Scaling statefulset ss to 0
Aug  5 14:26:10.255: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug  5 14:26:10.258: INFO: Deleting all statefulset in ns statefulset-3498
Aug  5 14:26:10.260: INFO: Scaling statefulset ss to 0
Aug  5 14:26:10.269: INFO: Waiting for statefulset status.replicas updated to 0
Aug  5 14:26:10.271: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:26:10.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3498" for this suite.
Aug  5 14:26:16.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:26:16.381: INFO: namespace statefulset-3498 deletion completed in 6.092069909s

• [SLOW TEST:68.264 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
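For reference, the "burst scaling" behavior exercised above comes from a StatefulSet with podManagementPolicy: Parallel, which permits creating and deleting pods in parallel instead of one ordinal at a time, even while some pods are unhealthy. A minimal sketch of such a StatefulSet follows; the names ss, nginx, and statefulset-3498 appear in the log, everything else is illustrative:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-3498
spec:
  serviceName: test               # headless Service name; illustrative
  replicas: 3
  podManagementPolicy: Parallel   # burst create/delete, no ordinal ordering
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

The scale-down the log waits on can be issued with kubectl scale statefulset ss --replicas=0 -n statefulset-3498; with Parallel management all pods terminate at once rather than in reverse ordinal order.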
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:26:16.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-rdl5
STEP: Creating a pod to test atomic-volume-subpath
Aug  5 14:26:16.450: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rdl5" in namespace "subpath-1030" to be "success or failure"
Aug  5 14:26:16.454: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.409697ms
Aug  5 14:26:18.458: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007582274s
Aug  5 14:26:20.473: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Running", Reason="", readiness=true. Elapsed: 4.022751984s
Aug  5 14:26:22.477: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Running", Reason="", readiness=true. Elapsed: 6.026588613s
Aug  5 14:26:24.481: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Running", Reason="", readiness=true. Elapsed: 8.030833591s
Aug  5 14:26:26.486: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Running", Reason="", readiness=true. Elapsed: 10.035603114s
Aug  5 14:26:28.491: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Running", Reason="", readiness=true. Elapsed: 12.040131473s
Aug  5 14:26:30.495: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Running", Reason="", readiness=true. Elapsed: 14.044125662s
Aug  5 14:26:32.499: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Running", Reason="", readiness=true. Elapsed: 16.048168126s
Aug  5 14:26:34.502: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Running", Reason="", readiness=true. Elapsed: 18.052121808s
Aug  5 14:26:36.506: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Running", Reason="", readiness=true. Elapsed: 20.055992184s
Aug  5 14:26:38.511: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Running", Reason="", readiness=true. Elapsed: 22.060332527s
Aug  5 14:26:40.515: INFO: Pod "pod-subpath-test-configmap-rdl5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.064998299s
STEP: Saw pod success
Aug  5 14:26:40.515: INFO: Pod "pod-subpath-test-configmap-rdl5" satisfied condition "success or failure"
Aug  5 14:26:40.519: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-rdl5 container test-container-subpath-configmap-rdl5: 
STEP: delete the pod
Aug  5 14:26:40.534: INFO: Waiting for pod pod-subpath-test-configmap-rdl5 to disappear
Aug  5 14:26:40.635: INFO: Pod pod-subpath-test-configmap-rdl5 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-rdl5
Aug  5 14:26:40.635: INFO: Deleting pod "pod-subpath-test-configmap-rdl5" in namespace "subpath-1030"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:26:40.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1030" for this suite.
Aug  5 14:26:46.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:26:46.914: INFO: namespace subpath-1030 deletion completed in 6.200774307s

• [SLOW TEST:30.533 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
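The subpath test above mounts a single ConfigMap key over a file path that already exists in the container image. A minimal sketch under assumed names (only the namespace subpath-1030 and the pod-name prefix come from the log):

apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-cm               # illustrative
data:
  configmap-key: "mounted over an existing file"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo         # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # /etc/passwd exists in the image; the subPath mount covers just that
    # one file rather than shadowing the whole directory.
    command: ["sh", "-c", "cat /etc/passwd"]
    volumeMounts:
    - name: cm
      mountPath: /etc/passwd
      subPath: configmap-key
  volumes:
  - name: cm
    configMap:
      name: subpath-cm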
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:26:46.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-f90596cd-049d-40b6-b664-1bcdea3dfde2
STEP: Creating secret with name secret-projected-all-test-volume-eef5c10a-6b59-4fa7-b498-25926dbc3822
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug  5 14:26:47.139: INFO: Waiting up to 5m0s for pod "projected-volume-5be5b691-760b-4135-b052-4ca7246357c6" in namespace "projected-1542" to be "success or failure"
Aug  5 14:26:47.153: INFO: Pod "projected-volume-5be5b691-760b-4135-b052-4ca7246357c6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.014856ms
Aug  5 14:26:49.157: INFO: Pod "projected-volume-5be5b691-760b-4135-b052-4ca7246357c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017721849s
Aug  5 14:26:51.162: INFO: Pod "projected-volume-5be5b691-760b-4135-b052-4ca7246357c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023430047s
STEP: Saw pod success
Aug  5 14:26:51.162: INFO: Pod "projected-volume-5be5b691-760b-4135-b052-4ca7246357c6" satisfied condition "success or failure"
Aug  5 14:26:51.165: INFO: Trying to get logs from node iruya-worker pod projected-volume-5be5b691-760b-4135-b052-4ca7246357c6 container projected-all-volume-test: 
STEP: delete the pod
Aug  5 14:26:51.184: INFO: Waiting for pod projected-volume-5be5b691-760b-4135-b052-4ca7246357c6 to disappear
Aug  5 14:26:51.188: INFO: Pod projected-volume-5be5b691-760b-4135-b052-4ca7246357c6 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:26:51.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1542" for this suite.
Aug  5 14:26:57.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:26:57.288: INFO: namespace projected-1542 deletion completed in 6.096429902s

• [SLOW TEST:10.373 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
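The projected-volume test combines several volume sources under a single mount point. A sketch of the shape it exercises (projected-all-volume-test is the container name from the log; the resource names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /all/podname /all/cm-key /all/secret-key"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: projected-cm         # illustrative
          items:
          - key: data
            path: cm-key
      - secret:
          name: projected-secret     # illustrative
          items:
          - key: data
            path: secret-key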
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:26:57.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug  5 14:26:57.384: INFO: Pod name pod-release: Found 0 pods out of 1
Aug  5 14:27:02.389: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:27:02.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7954" for this suite.
Aug  5 14:27:08.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:27:08.581: INFO: namespace replication-controller-7954 deletion completed in 6.136276628s

• [SLOW TEST:11.293 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
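What "released" means above: once a pod's labels stop matching the ReplicationController's selector, the controller drops its owner reference to the pod and spins up a replacement. A sketch of the controller (pod-release comes from the log; the rest is illustrative):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

Relabeling one of its pods, e.g. kubectl label pod <pod> name=released --overwrite, orphans that pod while the controller immediately creates a matching replacement.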
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:27:08.582: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug  5 14:27:08.815: INFO: Waiting up to 5m0s for pod "pod-549f0fb1-a92a-411c-b74b-d9400d25f815" in namespace "emptydir-1171" to be "success or failure"
Aug  5 14:27:08.827: INFO: Pod "pod-549f0fb1-a92a-411c-b74b-d9400d25f815": Phase="Pending", Reason="", readiness=false. Elapsed: 11.293147ms
Aug  5 14:27:10.899: INFO: Pod "pod-549f0fb1-a92a-411c-b74b-d9400d25f815": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083542978s
Aug  5 14:27:12.903: INFO: Pod "pod-549f0fb1-a92a-411c-b74b-d9400d25f815": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08740595s
STEP: Saw pod success
Aug  5 14:27:12.903: INFO: Pod "pod-549f0fb1-a92a-411c-b74b-d9400d25f815" satisfied condition "success or failure"
Aug  5 14:27:12.905: INFO: Trying to get logs from node iruya-worker2 pod pod-549f0fb1-a92a-411c-b74b-d9400d25f815 container test-container: 
STEP: delete the pod
Aug  5 14:27:12.930: INFO: Waiting for pod pod-549f0fb1-a92a-411c-b74b-d9400d25f815 to disappear
Aug  5 14:27:12.940: INFO: Pod pod-549f0fb1-a92a-411c-b74b-d9400d25f815 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:27:12.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1171" for this suite.
Aug  5 14:27:18.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:27:19.025: INFO: namespace emptydir-1171 deletion completed in 6.081567408s

• [SLOW TEST:10.443 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
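The emptyDir test matrix in this run varies three knobs: the user (root vs. non-root), the file mode being checked, and the medium (node disk by default, tmpfs via medium: Memory). A sketch of the non-root, default-medium case (all names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the "non-root" variant
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                 # swap to emptyDir: { medium: Memory } for the tmpfs variants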
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:27:19.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug  5 14:27:19.153: INFO: Waiting up to 5m0s for pod "pod-9482f78d-4b92-43b0-9fbc-35d28d8931db" in namespace "emptydir-6158" to be "success or failure"
Aug  5 14:27:19.165: INFO: Pod "pod-9482f78d-4b92-43b0-9fbc-35d28d8931db": Phase="Pending", Reason="", readiness=false. Elapsed: 12.5448ms
Aug  5 14:27:21.169: INFO: Pod "pod-9482f78d-4b92-43b0-9fbc-35d28d8931db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016688717s
Aug  5 14:27:23.174: INFO: Pod "pod-9482f78d-4b92-43b0-9fbc-35d28d8931db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020945876s
STEP: Saw pod success
Aug  5 14:27:23.174: INFO: Pod "pod-9482f78d-4b92-43b0-9fbc-35d28d8931db" satisfied condition "success or failure"
Aug  5 14:27:23.176: INFO: Trying to get logs from node iruya-worker pod pod-9482f78d-4b92-43b0-9fbc-35d28d8931db container test-container: 
STEP: delete the pod
Aug  5 14:27:23.215: INFO: Waiting for pod pod-9482f78d-4b92-43b0-9fbc-35d28d8931db to disappear
Aug  5 14:27:23.243: INFO: Pod pod-9482f78d-4b92-43b0-9fbc-35d28d8931db no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:27:23.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6158" for this suite.
Aug  5 14:27:29.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:27:29.353: INFO: namespace emptydir-6158 deletion completed in 6.105015906s

• [SLOW TEST:10.328 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:27:29.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug  5 14:27:29.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-1992'
Aug  5 14:27:29.557: INFO: stderr: ""
Aug  5 14:27:29.557: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Aug  5 14:27:34.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-1992 -o json'
Aug  5 14:27:34.718: INFO: stderr: ""
Aug  5 14:27:34.719: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-05T14:27:29Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-1992\",\n        \"resourceVersion\": \"3108562\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-1992/pods/e2e-test-nginx-pod\",\n        \"uid\": \"bba5f699-ccab-4c6a-8b1c-f2e7d4a7bf35\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-hn42h\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-hn42h\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-hn42h\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-05T14:27:29Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-05T14:27:32Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-05T14:27:32Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-05T14:27:29Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://047778d26fccef14c361efb63934850ff47d413f9af04f82e2a0867193332ae4\",\n                
\"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-05T14:27:32Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.7\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.216\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-05T14:27:29Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug  5 14:27:34.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1992'
Aug  5 14:27:35.078: INFO: stderr: ""
Aug  5 14:27:35.078: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Aug  5 14:27:35.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1992'
Aug  5 14:27:38.565: INFO: stderr: ""
Aug  5 14:27:38.565: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:27:38.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1992" for this suite.
Aug  5 14:27:44.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:27:44.656: INFO: namespace kubectl-1992 deletion completed in 6.087363447s

• [SLOW TEST:15.303 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
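The replace flow above fetches the live pod as JSON, swaps the image, and pipes the result back through kubectl replace -f -. A sketch of the changed object (metadata values come from the log; the sed edit in the comment is illustrative, not the test's actual mechanism):

# In practice: kubectl get pod e2e-test-nginx-pod -n kubectl-1992 -o json \
#   | sed 's|nginx:1.14-alpine|busybox:1.29|' | kubectl replace -f -
# Only the container image is mutable on a running Pod, so the replaced
# object must be the full live object with just the image swapped:
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-1992
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # was nginx:1.14-alpine
  # ...remaining fields exactly as returned by kubectl get -o json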
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:27:44.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 14:27:44.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-32473254-c8a8-45dd-8498-fce029bf5f95" in namespace "projected-4435" to be "success or failure"
Aug  5 14:27:44.756: INFO: Pod "downwardapi-volume-32473254-c8a8-45dd-8498-fce029bf5f95": Phase="Pending", Reason="", readiness=false. Elapsed: 10.178912ms
Aug  5 14:27:46.760: INFO: Pod "downwardapi-volume-32473254-c8a8-45dd-8498-fce029bf5f95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014830311s
Aug  5 14:27:48.764: INFO: Pod "downwardapi-volume-32473254-c8a8-45dd-8498-fce029bf5f95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018175817s
STEP: Saw pod success
Aug  5 14:27:48.764: INFO: Pod "downwardapi-volume-32473254-c8a8-45dd-8498-fce029bf5f95" satisfied condition "success or failure"
Aug  5 14:27:48.766: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-32473254-c8a8-45dd-8498-fce029bf5f95 container client-container: 
STEP: delete the pod
Aug  5 14:27:48.787: INFO: Waiting for pod downwardapi-volume-32473254-c8a8-45dd-8498-fce029bf5f95 to disappear
Aug  5 14:27:48.791: INFO: Pod downwardapi-volume-32473254-c8a8-45dd-8498-fce029bf5f95 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:27:48.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4435" for this suite.
Aug  5 14:27:54.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:27:54.880: INFO: namespace projected-4435 deletion completed in 6.086101734s

• [SLOW TEST:10.224 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
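The downward-API volume above exposes the container's own resource requests as files. A sketch (client-container is the name from the log; everything else is illustrative). The sibling test that follows relies on the same mechanism: when no CPU limit is set, resource: limits.cpu falls back to the node's allocatable CPU.

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi       # file will contain "32"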
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:27:54.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug  5 14:27:54.970: INFO: Waiting up to 5m0s for pod "downwardapi-volume-258bddd8-763d-46cd-9c4b-ad35b3f7497e" in namespace "downward-api-2524" to be "success or failure"
Aug  5 14:27:55.026: INFO: Pod "downwardapi-volume-258bddd8-763d-46cd-9c4b-ad35b3f7497e": Phase="Pending", Reason="", readiness=false. Elapsed: 56.071646ms
Aug  5 14:27:57.029: INFO: Pod "downwardapi-volume-258bddd8-763d-46cd-9c4b-ad35b3f7497e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059600381s
Aug  5 14:27:59.034: INFO: Pod "downwardapi-volume-258bddd8-763d-46cd-9c4b-ad35b3f7497e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063880728s
STEP: Saw pod success
Aug  5 14:27:59.034: INFO: Pod "downwardapi-volume-258bddd8-763d-46cd-9c4b-ad35b3f7497e" satisfied condition "success or failure"
Aug  5 14:27:59.037: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-258bddd8-763d-46cd-9c4b-ad35b3f7497e container client-container: 
STEP: delete the pod
Aug  5 14:27:59.073: INFO: Waiting for pod downwardapi-volume-258bddd8-763d-46cd-9c4b-ad35b3f7497e to disappear
Aug  5 14:27:59.127: INFO: Pod downwardapi-volume-258bddd8-763d-46cd-9c4b-ad35b3f7497e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:27:59.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2524" for this suite.
Aug  5 14:28:05.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:28:05.249: INFO: namespace downward-api-2524 deletion completed in 6.117384337s

• [SLOW TEST:10.368 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:28:05.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug  5 14:28:05.298: INFO: Waiting up to 5m0s for pod "pod-739fefbf-ba47-4a16-ad65-147cc17fcedb" in namespace "emptydir-8962" to be "success or failure"
Aug  5 14:28:05.310: INFO: Pod "pod-739fefbf-ba47-4a16-ad65-147cc17fcedb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.849742ms
Aug  5 14:28:07.313: INFO: Pod "pod-739fefbf-ba47-4a16-ad65-147cc17fcedb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015351693s
Aug  5 14:28:09.317: INFO: Pod "pod-739fefbf-ba47-4a16-ad65-147cc17fcedb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019078505s
STEP: Saw pod success
Aug  5 14:28:09.317: INFO: Pod "pod-739fefbf-ba47-4a16-ad65-147cc17fcedb" satisfied condition "success or failure"
Aug  5 14:28:09.320: INFO: Trying to get logs from node iruya-worker2 pod pod-739fefbf-ba47-4a16-ad65-147cc17fcedb container test-container: 
STEP: delete the pod
Aug  5 14:28:09.341: INFO: Waiting for pod pod-739fefbf-ba47-4a16-ad65-147cc17fcedb to disappear
Aug  5 14:28:09.345: INFO: Pod pod-739fefbf-ba47-4a16-ad65-147cc17fcedb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:28:09.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8962" for this suite.
Aug  5 14:28:15.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:28:15.480: INFO: namespace emptydir-8962 deletion completed in 6.131350975s

• [SLOW TEST:10.230 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:28:15.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-b486241b-ae97-4966-9894-c27fe7e9d856
STEP: Creating a pod to test consume secrets
Aug  5 14:28:15.567: INFO: Waiting up to 5m0s for pod "pod-secrets-0faee6a6-b2de-42b7-be2e-8f6d18b846ca" in namespace "secrets-6480" to be "success or failure"
Aug  5 14:28:15.571: INFO: Pod "pod-secrets-0faee6a6-b2de-42b7-be2e-8f6d18b846ca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.072097ms
Aug  5 14:28:17.575: INFO: Pod "pod-secrets-0faee6a6-b2de-42b7-be2e-8f6d18b846ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007247465s
Aug  5 14:28:19.579: INFO: Pod "pod-secrets-0faee6a6-b2de-42b7-be2e-8f6d18b846ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011065887s
STEP: Saw pod success
Aug  5 14:28:19.579: INFO: Pod "pod-secrets-0faee6a6-b2de-42b7-be2e-8f6d18b846ca" satisfied condition "success or failure"
Aug  5 14:28:19.581: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-0faee6a6-b2de-42b7-be2e-8f6d18b846ca container secret-volume-test: 
STEP: delete the pod
Aug  5 14:28:19.620: INFO: Waiting for pod pod-secrets-0faee6a6-b2de-42b7-be2e-8f6d18b846ca to disappear
Aug  5 14:28:19.678: INFO: Pod pod-secrets-0faee6a6-b2de-42b7-be2e-8f6d18b846ca no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:28:19.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6480" for this suite.
Aug  5 14:28:25.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:28:25.793: INFO: namespace secrets-6480 deletion completed in 6.110362208s

• [SLOW TEST:10.313 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
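The secret-volume test mounts each key of a Secret as a file under the mount path. A sketch with illustrative names (secret-volume-test is the container name from the log):

apiVersion: v1
kind: Secret
metadata:
  name: secret-demo
data:
  data-1: dmFsdWUtMQ==           # base64("value-1")
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo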
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:28:25.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-86f7ea2a-dd1d-473f-9311-0f565f0bddde
STEP: Creating a pod to test consume secrets
Aug  5 14:28:25.951: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-16760f84-04b1-4565-8e96-7d3e2b5a6c27" in namespace "projected-4086" to be "success or failure"
Aug  5 14:28:25.960: INFO: Pod "pod-projected-secrets-16760f84-04b1-4565-8e96-7d3e2b5a6c27": Phase="Pending", Reason="", readiness=false. Elapsed: 8.781077ms
Aug  5 14:28:27.996: INFO: Pod "pod-projected-secrets-16760f84-04b1-4565-8e96-7d3e2b5a6c27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044722971s
Aug  5 14:28:30.000: INFO: Pod "pod-projected-secrets-16760f84-04b1-4565-8e96-7d3e2b5a6c27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048765094s
STEP: Saw pod success
Aug  5 14:28:30.000: INFO: Pod "pod-projected-secrets-16760f84-04b1-4565-8e96-7d3e2b5a6c27" satisfied condition "success or failure"
Aug  5 14:28:30.003: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-16760f84-04b1-4565-8e96-7d3e2b5a6c27 container projected-secret-volume-test: 
STEP: delete the pod
Aug  5 14:28:30.042: INFO: Waiting for pod pod-projected-secrets-16760f84-04b1-4565-8e96-7d3e2b5a6c27 to disappear
Aug  5 14:28:30.098: INFO: Pod pod-projected-secrets-16760f84-04b1-4565-8e96-7d3e2b5a6c27 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:28:30.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4086" for this suite.
Aug  5 14:28:36.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:28:36.216: INFO: namespace projected-4086 deletion completed in 6.113667855s

• [SLOW TEST:10.423 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
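"With mappings" above means the Secret keys are remapped to custom file paths via items. A sketch (projected-secret-volume-test is the container name from the log; the rest is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-secret/remapped/data-1"]
    volumeMounts:
    - name: projected-secret
      mountPath: /etc/projected-secret
      readOnly: true
  volumes:
  - name: projected-secret
    projected:
      sources:
      - secret:
          name: secret-demo           # illustrative
          items:
          - key: data-1
            path: remapped/data-1     # key exposed under a custom path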
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:28:36.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:28:40.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3480" for this suite.
Aug  5 14:29:30.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:29:30.441: INFO: namespace kubelet-test-3480 deletion completed in 50.130589541s

• [SLOW TEST:54.224 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
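The hostAliases test verifies that entries from pod.spec.hostAliases are written into the container's /etc/hosts by the kubelet. A sketch (all names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox:1.29
    # /etc/hosts should now contain a kubelet-managed line mapping
    # 127.0.0.1 to foo.local and bar.local
    command: ["sh", "-c", "cat /etc/hosts"]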
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:29:30.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0805 14:29:31.537027       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug  5 14:29:31.537: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:29:31.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3605" for this suite.
Aug  5 14:29:37.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:29:37.677: INFO: namespace gc-3605 deletion completed in 6.136885049s

• [SLOW TEST:7.236 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
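"Not orphaning" above refers to the deletion propagation policy: deleting the Deployment with Background or Foreground propagation lets the garbage collector chase down the owned ReplicaSet and pods via their ownerReferences (the transient "expected 0 rs, got 1 rs" lines are the test polling while GC catches up). A sketch of the owning object (names illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gc-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

A plain kubectl delete deployment gc-demo cascades to the ReplicaSet and pods; passing propagationPolicy: Orphan in DeleteOptions would instead leave them behind.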
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:29:37.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug  5 14:29:37.765: INFO: Waiting up to 5m0s for pod "downward-api-d9703dd5-35d2-499b-a4f4-dd2751290b47" in namespace "downward-api-8838" to be "success or failure"
Aug  5 14:29:37.772: INFO: Pod "downward-api-d9703dd5-35d2-499b-a4f4-dd2751290b47": Phase="Pending", Reason="", readiness=false. Elapsed: 7.014723ms
Aug  5 14:29:39.776: INFO: Pod "downward-api-d9703dd5-35d2-499b-a4f4-dd2751290b47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010502504s
Aug  5 14:29:41.780: INFO: Pod "downward-api-d9703dd5-35d2-499b-a4f4-dd2751290b47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014893428s
STEP: Saw pod success
Aug  5 14:29:41.780: INFO: Pod "downward-api-d9703dd5-35d2-499b-a4f4-dd2751290b47" satisfied condition "success or failure"
Aug  5 14:29:41.783: INFO: Trying to get logs from node iruya-worker pod downward-api-d9703dd5-35d2-499b-a4f4-dd2751290b47 container dapi-container: 
STEP: delete the pod
Aug  5 14:29:41.816: INFO: Waiting for pod downward-api-d9703dd5-35d2-499b-a4f4-dd2751290b47 to disappear
Aug  5 14:29:41.833: INFO: Pod downward-api-d9703dd5-35d2-499b-a4f4-dd2751290b47 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:29:41.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8838" for this suite.
Aug  5 14:29:47.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:29:47.973: INFO: namespace downward-api-8838 deletion completed in 6.136167591s

• [SLOW TEST:10.295 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
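This variant surfaces resources through environment variables rather than a volume, and the point of the test is the fallback: with no limits set on the container, limits.cpu and limits.memory resolve to the node's allocatable values. A sketch (dapi-container is the name from the log; the rest is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo cpu=$CPU_LIMIT mem=$MEMORY_LIMIT"]
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu      # no limit set -> node allocatable CPU
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory   # no limit set -> node allocatable memory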
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:29:47.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 14:29:48.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:29:52.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6151" for this suite.
Aug  5 14:30:42.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:30:42.291: INFO: namespace pods-6151 deletion completed in 50.103281114s

• [SLOW TEST:54.318 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
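Annotation: the exec-over-websockets flow above can be approximated with plain kubectl, which negotiates the same streaming subresource the test drives directly over a websocket. A sketch with hypothetical names:

# Start a long-running pod, then execute a command in it remotely.
kubectl run ws-demo --image=busybox:1.29 --restart=Never -- sleep 3600
kubectl wait --for=condition=Ready pod/ws-demo --timeout=60s
kubectl exec ws-demo -- echo remote-exec-ok
# Raw endpoint a websocket client would hit (e.g. behind `kubectl proxy`):
#   /api/v1/namespaces/default/pods/ws-demo/exec?command=echo&command=hi&stdout=true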
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:30:42.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-ps74
STEP: Creating a pod to test atomic-volume-subpath
Aug  5 14:30:42.368: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-ps74" in namespace "subpath-2779" to be "success or failure"
Aug  5 14:30:42.379: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Pending", Reason="", readiness=false. Elapsed: 10.506963ms
Aug  5 14:30:44.383: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014863159s
Aug  5 14:30:46.388: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Running", Reason="", readiness=true. Elapsed: 4.019473839s
Aug  5 14:30:48.392: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Running", Reason="", readiness=true. Elapsed: 6.023885515s
Aug  5 14:30:50.396: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Running", Reason="", readiness=true. Elapsed: 8.027989167s
Aug  5 14:30:52.400: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Running", Reason="", readiness=true. Elapsed: 10.031639366s
Aug  5 14:30:54.404: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Running", Reason="", readiness=true. Elapsed: 12.035460708s
Aug  5 14:30:56.408: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Running", Reason="", readiness=true. Elapsed: 14.040065134s
Aug  5 14:30:58.413: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Running", Reason="", readiness=true. Elapsed: 16.044891812s
Aug  5 14:31:00.417: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Running", Reason="", readiness=true. Elapsed: 18.049343175s
Aug  5 14:31:02.421: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Running", Reason="", readiness=true. Elapsed: 20.052439609s
Aug  5 14:31:04.425: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Running", Reason="", readiness=true. Elapsed: 22.056553986s
Aug  5 14:31:06.429: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Running", Reason="", readiness=true. Elapsed: 24.060573413s
Aug  5 14:31:08.434: INFO: Pod "pod-subpath-test-projected-ps74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.065477077s
STEP: Saw pod success
Aug  5 14:31:08.434: INFO: Pod "pod-subpath-test-projected-ps74" satisfied condition "success or failure"
Aug  5 14:31:08.437: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-ps74 container test-container-subpath-projected-ps74: 
STEP: delete the pod
Aug  5 14:31:08.456: INFO: Waiting for pod pod-subpath-test-projected-ps74 to disappear
Aug  5 14:31:08.461: INFO: Pod pod-subpath-test-projected-ps74 no longer exists
STEP: Deleting pod pod-subpath-test-projected-ps74
Aug  5 14:31:08.461: INFO: Deleting pod "pod-subpath-test-projected-ps74" in namespace "subpath-2779"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:31:08.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2779" for this suite.
Aug  5 14:31:14.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:31:14.565: INFO: namespace subpath-2779 deletion completed in 6.098709645s

• [SLOW TEST:32.274 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
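Annotation: a hand-rolled equivalent of the atomic-writer subpath case above — a projected ConfigMap volume with a single key mounted via subPath. ConfigMap, pod, and key names are illustrative assumptions:

kubectl create configmap subpath-demo --from-literal=data-1=hello
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["cat", "/probe/data-1"]
    volumeMounts:
    - name: proj
      mountPath: /probe/data-1
      subPath: data-1          # mount one key out of the projected volume
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: subpath-demo
EOF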
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:31:14.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-1b3878c2-cf7e-4885-94ad-9573065ee4dd
STEP: Creating a pod to test consume secrets
Aug  5 14:31:14.661: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-87fc2edc-738f-4f42-9322-6a0b8653e781" in namespace "projected-6488" to be "success or failure"
Aug  5 14:31:14.665: INFO: Pod "pod-projected-secrets-87fc2edc-738f-4f42-9322-6a0b8653e781": Phase="Pending", Reason="", readiness=false. Elapsed: 3.718602ms
Aug  5 14:31:16.669: INFO: Pod "pod-projected-secrets-87fc2edc-738f-4f42-9322-6a0b8653e781": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007778518s
Aug  5 14:31:18.672: INFO: Pod "pod-projected-secrets-87fc2edc-738f-4f42-9322-6a0b8653e781": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011143632s
STEP: Saw pod success
Aug  5 14:31:18.672: INFO: Pod "pod-projected-secrets-87fc2edc-738f-4f42-9322-6a0b8653e781" satisfied condition "success or failure"
Aug  5 14:31:18.675: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-87fc2edc-738f-4f42-9322-6a0b8653e781 container projected-secret-volume-test: 
STEP: delete the pod
Aug  5 14:31:18.697: INFO: Waiting for pod pod-projected-secrets-87fc2edc-738f-4f42-9322-6a0b8653e781 to disappear
Aug  5 14:31:18.720: INFO: Pod pod-projected-secrets-87fc2edc-738f-4f42-9322-6a0b8653e781 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:31:18.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6488" for this suite.
Aug  5 14:31:24.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:31:24.817: INFO: namespace projected-6488 deletion completed in 6.093322652s

• [SLOW TEST:10.252 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
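Annotation: a minimal sketch of the "mappings and Item Mode" case above — a projected secret volume where one key is remapped to a new path with an explicit per-item file mode. Names are illustrative assumptions:

kubectl create secret generic item-mode-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/new-path-data-1"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: secret-vol
    projected:
      sources:
      - secret:
          name: item-mode-demo
          items:
          - key: data-1
            path: new-path-data-1   # mapping: key -> new path
            mode: 0400              # per-item file mode (octal)
EOF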
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:31:24.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug  5 14:31:24.902: INFO: Waiting up to 5m0s for pod "pod-4cbe9b6e-c405-47b6-bc51-86d3a6993694" in namespace "emptydir-5646" to be "success or failure"
Aug  5 14:31:24.907: INFO: Pod "pod-4cbe9b6e-c405-47b6-bc51-86d3a6993694": Phase="Pending", Reason="", readiness=false. Elapsed: 5.14ms
Aug  5 14:31:26.911: INFO: Pod "pod-4cbe9b6e-c405-47b6-bc51-86d3a6993694": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009087425s
Aug  5 14:31:28.916: INFO: Pod "pod-4cbe9b6e-c405-47b6-bc51-86d3a6993694": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013263288s
STEP: Saw pod success
Aug  5 14:31:28.916: INFO: Pod "pod-4cbe9b6e-c405-47b6-bc51-86d3a6993694" satisfied condition "success or failure"
Aug  5 14:31:28.919: INFO: Trying to get logs from node iruya-worker2 pod pod-4cbe9b6e-c405-47b6-bc51-86d3a6993694 container test-container: 
STEP: delete the pod
Aug  5 14:31:28.953: INFO: Waiting for pod pod-4cbe9b6e-c405-47b6-bc51-86d3a6993694 to disappear
Aug  5 14:31:28.955: INFO: Pod pod-4cbe9b6e-c405-47b6-bc51-86d3a6993694 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:31:28.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5646" for this suite.
Aug  5 14:31:34.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:31:35.042: INFO: namespace emptydir-5646 deletion completed in 6.083114722s

• [SLOW TEST:10.224 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
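Annotation: the (root,0644,default) case above boils down to an emptyDir on the default medium, written as root with mode 0644. A hand-runnable sketch; names and the busybox image are assumptions (the suite uses its own mounttest image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo hi > /ed/f && chmod 0644 /ed/f && ls -ln /ed/f"]
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir: {}           # default medium (node disk), not Memory
EOF
kubectl logs emptydir-mode-demo    # expect: -rw-r--r-- ... 0 0 ... /ed/f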
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:31:35.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-rzlrr in namespace proxy-4712
I0805 14:31:35.160112       6 runners.go:180] Created replication controller with name: proxy-service-rzlrr, namespace: proxy-4712, replica count: 1
I0805 14:31:36.210511       6 runners.go:180] proxy-service-rzlrr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0805 14:31:37.210758       6 runners.go:180] proxy-service-rzlrr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0805 14:31:38.210987       6 runners.go:180] proxy-service-rzlrr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0805 14:31:39.211216       6 runners.go:180] proxy-service-rzlrr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug  5 14:31:39.215: INFO: setup took 4.115177581s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug  5 14:31:39.221: INFO: (0) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 6.271044ms)
Aug  5 14:31:39.221: INFO: (0) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 6.476207ms)
Aug  5 14:31:39.221: INFO: (0) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 6.626102ms)
Aug  5 14:31:39.221: INFO: (0) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 6.608825ms)
Aug  5 14:31:39.221: INFO: (0) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 6.818093ms)
Aug  5 14:31:39.222: INFO: (0) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 6.764942ms)
Aug  5 14:31:39.222: INFO: (0) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 7.520013ms)
Aug  5 14:31:39.223: INFO: (0) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 7.903339ms)
Aug  5 14:31:39.228: INFO: (0) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 13.336054ms)
Aug  5 14:31:39.228: INFO: (0) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 13.378048ms)
Aug  5 14:31:39.228: INFO: (0) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 13.513617ms)
Aug  5 14:31:39.229: INFO: (0) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 14.627391ms)
Aug  5 14:31:39.230: INFO: (0) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:460/proxy/: tls baz (200; 14.918746ms)
Aug  5 14:31:39.230: INFO: (0) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: ... (200; 5.044824ms)
Aug  5 14:31:39.239: INFO: (1) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 5.244848ms)
Aug  5 14:31:39.239: INFO: (1) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 5.210591ms)
Aug  5 14:31:39.239: INFO: (1) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 5.270133ms)
Aug  5 14:31:39.239: INFO: (1) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 5.318733ms)
Aug  5 14:31:39.239: INFO: (1) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 5.429251ms)
Aug  5 14:31:39.239: INFO: (1) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 5.529952ms)
Aug  5 14:31:39.239: INFO: (1) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 5.548346ms)
Aug  5 14:31:39.240: INFO: (1) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 6.003104ms)
Aug  5 14:31:39.243: INFO: (2) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 2.93128ms)
Aug  5 14:31:39.243: INFO: (2) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 3.173458ms)
Aug  5 14:31:39.244: INFO: (2) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 3.908381ms)
Aug  5 14:31:39.244: INFO: (2) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 4.126962ms)
Aug  5 14:31:39.244: INFO: (2) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: ... (200; 5.518497ms)
Aug  5 14:31:39.245: INFO: (2) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 5.405437ms)
Aug  5 14:31:39.245: INFO: (2) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:460/proxy/: tls baz (200; 5.396647ms)
Aug  5 14:31:39.245: INFO: (2) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 5.543089ms)
Aug  5 14:31:39.245: INFO: (2) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 5.473937ms)
Aug  5 14:31:39.245: INFO: (2) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 5.518564ms)
Aug  5 14:31:39.249: INFO: (3) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.373803ms)
Aug  5 14:31:39.249: INFO: (3) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.830953ms)
Aug  5 14:31:39.250: INFO: (3) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 4.064254ms)
Aug  5 14:31:39.250: INFO: (3) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 4.166227ms)
Aug  5 14:31:39.250: INFO: (3) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 4.099154ms)
Aug  5 14:31:39.250: INFO: (3) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 4.12085ms)
Aug  5 14:31:39.250: INFO: (3) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 4.182753ms)
Aug  5 14:31:39.250: INFO: (3) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 4.386651ms)
Aug  5 14:31:39.250: INFO: (3) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: ... (200; 4.470567ms)
Aug  5 14:31:39.250: INFO: (3) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 4.445154ms)
Aug  5 14:31:39.250: INFO: (3) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:460/proxy/: tls baz (200; 4.789763ms)
Aug  5 14:31:39.251: INFO: (3) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 5.756872ms)
Aug  5 14:31:39.251: INFO: (3) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 5.800967ms)
Aug  5 14:31:39.251: INFO: (3) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 5.76909ms)
Aug  5 14:31:39.251: INFO: (3) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 5.79064ms)
Aug  5 14:31:39.255: INFO: (4) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 3.165254ms)
Aug  5 14:31:39.255: INFO: (4) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.711216ms)
Aug  5 14:31:39.256: INFO: (4) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 4.163657ms)
Aug  5 14:31:39.256: INFO: (4) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 4.557126ms)
Aug  5 14:31:39.256: INFO: (4) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 4.860841ms)
Aug  5 14:31:39.257: INFO: (4) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 5.13429ms)
Aug  5 14:31:39.257: INFO: (4) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 5.142979ms)
Aug  5 14:31:39.257: INFO: (4) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: test<... (200; 5.157266ms)
Aug  5 14:31:39.257: INFO: (4) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 5.168089ms)
Aug  5 14:31:39.257: INFO: (4) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 5.26026ms)
Aug  5 14:31:39.257: INFO: (4) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 5.221049ms)
Aug  5 14:31:39.257: INFO: (4) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 5.208399ms)
Aug  5 14:31:39.257: INFO: (4) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:460/proxy/: tls baz (200; 5.216733ms)
Aug  5 14:31:39.257: INFO: (4) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 5.247976ms)
Aug  5 14:31:39.257: INFO: (4) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 5.27869ms)
Aug  5 14:31:39.260: INFO: (5) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.094726ms)
Aug  5 14:31:39.260: INFO: (5) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.080276ms)
Aug  5 14:31:39.260: INFO: (5) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 3.228073ms)
Aug  5 14:31:39.260: INFO: (5) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 3.28773ms)
Aug  5 14:31:39.261: INFO: (5) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 4.258436ms)
Aug  5 14:31:39.261: INFO: (5) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 4.274211ms)
Aug  5 14:31:39.261: INFO: (5) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 4.313067ms)
Aug  5 14:31:39.261: INFO: (5) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 4.370514ms)
Aug  5 14:31:39.261: INFO: (5) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 4.363043ms)
Aug  5 14:31:39.261: INFO: (5) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 4.555948ms)
Aug  5 14:31:39.261: INFO: (5) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:460/proxy/: tls baz (200; 4.61761ms)
Aug  5 14:31:39.261: INFO: (5) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 4.574388ms)
Aug  5 14:31:39.262: INFO: (5) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 4.834079ms)
Aug  5 14:31:39.262: INFO: (5) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 4.785696ms)
Aug  5 14:31:39.262: INFO: (5) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 4.828002ms)
Aug  5 14:31:39.262: INFO: (5) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: test<... (200; 3.364849ms)
Aug  5 14:31:39.265: INFO: (6) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.394093ms)
Aug  5 14:31:39.265: INFO: (6) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: test (200; 3.735093ms)
Aug  5 14:31:39.266: INFO: (6) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 3.863611ms)
Aug  5 14:31:39.266: INFO: (6) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 4.092557ms)
Aug  5 14:31:39.266: INFO: (6) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 4.104894ms)
Aug  5 14:31:39.266: INFO: (6) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 4.175519ms)
Aug  5 14:31:39.266: INFO: (6) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 4.016289ms)
Aug  5 14:31:39.266: INFO: (6) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 4.118254ms)
Aug  5 14:31:39.266: INFO: (6) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 4.07881ms)
Aug  5 14:31:39.266: INFO: (6) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 4.439515ms)
Aug  5 14:31:39.266: INFO: (6) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 4.543863ms)
Aug  5 14:31:39.269: INFO: (7) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 2.149891ms)
Aug  5 14:31:39.269: INFO: (7) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 2.527922ms)
Aug  5 14:31:39.270: INFO: (7) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 2.912115ms)
Aug  5 14:31:39.270: INFO: (7) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 3.310931ms)
Aug  5 14:31:39.270: INFO: (7) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 3.355233ms)
Aug  5 14:31:39.270: INFO: (7) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 3.423905ms)
Aug  5 14:31:39.270: INFO: (7) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.466718ms)
Aug  5 14:31:39.270: INFO: (7) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:460/proxy/: tls baz (200; 3.433972ms)
Aug  5 14:31:39.270: INFO: (7) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.425491ms)
Aug  5 14:31:39.270: INFO: (7) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: ... (200; 3.411293ms)
Aug  5 14:31:39.275: INFO: (8) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.364365ms)
Aug  5 14:31:39.275: INFO: (8) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 3.562164ms)
Aug  5 14:31:39.275: INFO: (8) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 3.577116ms)
Aug  5 14:31:39.275: INFO: (8) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 3.553482ms)
Aug  5 14:31:39.275: INFO: (8) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.616924ms)
Aug  5 14:31:39.275: INFO: (8) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 4.051286ms)
Aug  5 14:31:39.276: INFO: (8) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 4.604189ms)
Aug  5 14:31:39.276: INFO: (8) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 4.588048ms)
Aug  5 14:31:39.276: INFO: (8) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 4.623784ms)
Aug  5 14:31:39.276: INFO: (8) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 4.781303ms)
Aug  5 14:31:39.276: INFO: (8) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 4.664334ms)
Aug  5 14:31:39.276: INFO: (8) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 4.65966ms)
Aug  5 14:31:39.279: INFO: (9) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.03828ms)
Aug  5 14:31:39.279: INFO: (9) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:460/proxy/: tls baz (200; 3.06519ms)
Aug  5 14:31:39.280: INFO: (9) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 3.307364ms)
Aug  5 14:31:39.280: INFO: (9) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.288343ms)
Aug  5 14:31:39.280: INFO: (9) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 3.455441ms)
Aug  5 14:31:39.280: INFO: (9) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 3.753626ms)
Aug  5 14:31:39.280: INFO: (9) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: test (200; 4.131021ms)
Aug  5 14:31:39.280: INFO: (9) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 4.238891ms)
Aug  5 14:31:39.281: INFO: (9) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 4.259065ms)
Aug  5 14:31:39.281: INFO: (9) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 4.242825ms)
Aug  5 14:31:39.281: INFO: (9) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 5.110522ms)
Aug  5 14:31:39.281: INFO: (9) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 5.015191ms)
Aug  5 14:31:39.281: INFO: (9) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 5.01009ms)
Aug  5 14:31:39.281: INFO: (9) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 5.175407ms)
Aug  5 14:31:39.281: INFO: (9) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 5.061987ms)
Aug  5 14:31:39.285: INFO: (10) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 3.985985ms)
Aug  5 14:31:39.286: INFO: (10) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 4.663733ms)
Aug  5 14:31:39.286: INFO: (10) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 4.696019ms)
Aug  5 14:31:39.286: INFO: (10) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 4.723491ms)
Aug  5 14:31:39.287: INFO: (10) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 5.11187ms)
Aug  5 14:31:39.287: INFO: (10) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 5.380543ms)
Aug  5 14:31:39.287: INFO: (10) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: test (200; 5.674283ms)
Aug  5 14:31:39.287: INFO: (10) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 5.654013ms)
Aug  5 14:31:39.287: INFO: (10) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 5.770565ms)
Aug  5 14:31:39.287: INFO: (10) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 5.750303ms)
Aug  5 14:31:39.287: INFO: (10) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 5.796036ms)
Aug  5 14:31:39.287: INFO: (10) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 5.709723ms)
Aug  5 14:31:39.287: INFO: (10) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 5.715373ms)
Aug  5 14:31:39.291: INFO: (11) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 3.318507ms)
Aug  5 14:31:39.291: INFO: (11) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 3.499928ms)
Aug  5 14:31:39.291: INFO: (11) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 3.503225ms)
Aug  5 14:31:39.291: INFO: (11) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.447361ms)
Aug  5 14:31:39.291: INFO: (11) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:460/proxy/: tls baz (200; 3.531843ms)
Aug  5 14:31:39.292: INFO: (11) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 4.430169ms)
Aug  5 14:31:39.292: INFO: (11) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 4.641989ms)
Aug  5 14:31:39.292: INFO: (11) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 4.58729ms)
Aug  5 14:31:39.292: INFO: (11) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 4.495414ms)
Aug  5 14:31:39.292: INFO: (11) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: test<... (200; 5.377487ms)
Aug  5 14:31:39.293: INFO: (11) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 5.393904ms)
Aug  5 14:31:39.297: INFO: (12) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 3.659124ms)
Aug  5 14:31:39.297: INFO: (12) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 4.038093ms)
Aug  5 14:31:39.297: INFO: (12) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 4.164457ms)
Aug  5 14:31:39.297: INFO: (12) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 4.235614ms)
Aug  5 14:31:39.297: INFO: (12) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 4.205845ms)
Aug  5 14:31:39.297: INFO: (12) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 4.381656ms)
Aug  5 14:31:39.297: INFO: (12) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 4.388183ms)
Aug  5 14:31:39.298: INFO: (12) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 4.811141ms)
Aug  5 14:31:39.298: INFO: (12) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 4.799421ms)
Aug  5 14:31:39.298: INFO: (12) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 4.887365ms)
Aug  5 14:31:39.298: INFO: (12) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 4.841922ms)
Aug  5 14:31:39.298: INFO: (12) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: test (200; 3.438162ms)
Aug  5 14:31:39.301: INFO: (13) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 3.455192ms)
Aug  5 14:31:39.301: INFO: (13) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 3.522312ms)
Aug  5 14:31:39.301: INFO: (13) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.477801ms)
Aug  5 14:31:39.301: INFO: (13) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 3.47971ms)
Aug  5 14:31:39.302: INFO: (13) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: test<... (200; 4.380839ms)
Aug  5 14:31:39.303: INFO: (13) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 4.901324ms)
Aug  5 14:31:39.303: INFO: (13) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 4.974978ms)
Aug  5 14:31:39.305: INFO: (14) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:460/proxy/: tls baz (200; 2.231852ms)
Aug  5 14:31:39.306: INFO: (14) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 2.915665ms)
Aug  5 14:31:39.306: INFO: (14) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 3.054791ms)
Aug  5 14:31:39.306: INFO: (14) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 3.262394ms)
Aug  5 14:31:39.306: INFO: (14) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 3.452442ms)
Aug  5 14:31:39.307: INFO: (14) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 3.389891ms)
Aug  5 14:31:39.307: INFO: (14) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.4081ms)
Aug  5 14:31:39.307: INFO: (14) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 3.466049ms)
Aug  5 14:31:39.307: INFO: (14) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: test<... (200; 5.426252ms)
Aug  5 14:31:39.314: INFO: (15) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 5.511858ms)
Aug  5 14:31:39.314: INFO: (15) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 5.449234ms)
Aug  5 14:31:39.314: INFO: (15) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 5.453567ms)
Aug  5 14:31:39.314: INFO: (15) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 5.532861ms)
Aug  5 14:31:39.314: INFO: (15) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 5.559184ms)
Aug  5 14:31:39.314: INFO: (15) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 5.513358ms)
Aug  5 14:31:39.314: INFO: (15) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 5.591608ms)
Aug  5 14:31:39.314: INFO: (15) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 5.519689ms)
Aug  5 14:31:39.314: INFO: (15) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 5.500642ms)
Aug  5 14:31:39.314: INFO: (15) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: test (200; 3.799941ms)
Aug  5 14:31:39.318: INFO: (16) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: ... (200; 3.915994ms)
Aug  5 14:31:39.318: INFO: (16) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 3.901363ms)
Aug  5 14:31:39.319: INFO: (16) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 4.990597ms)
Aug  5 14:31:39.319: INFO: (16) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 4.897913ms)
Aug  5 14:31:39.319: INFO: (16) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 5.114916ms)
Aug  5 14:31:39.319: INFO: (16) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 5.170685ms)
Aug  5 14:31:39.319: INFO: (16) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 5.199039ms)
Aug  5 14:31:39.319: INFO: (16) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 5.239319ms)
Aug  5 14:31:39.321: INFO: (17) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 2.164699ms)
Aug  5 14:31:39.322: INFO: (17) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 2.651067ms)
Aug  5 14:31:39.323: INFO: (17) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.624075ms)
Aug  5 14:31:39.323: INFO: (17) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 3.886349ms)
Aug  5 14:31:39.325: INFO: (17) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:460/proxy/: tls baz (200; 5.594856ms)
Aug  5 14:31:39.325: INFO: (17) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 5.539889ms)
Aug  5 14:31:39.325: INFO: (17) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 5.582334ms)
Aug  5 14:31:39.325: INFO: (17) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 5.576798ms)
Aug  5 14:31:39.325: INFO: (17) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 5.647026ms)
Aug  5 14:31:39.325: INFO: (17) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 5.699316ms)
Aug  5 14:31:39.325: INFO: (17) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: test<... (200; 5.687681ms)
Aug  5 14:31:39.325: INFO: (17) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 5.768688ms)
Aug  5 14:31:39.325: INFO: (17) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 5.680618ms)
Aug  5 14:31:39.325: INFO: (17) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 5.686793ms)
Aug  5 14:31:39.328: INFO: (18) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 3.080505ms)
Aug  5 14:31:39.329: INFO: (18) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:460/proxy/: tls baz (200; 3.906776ms)
Aug  5 14:31:39.329: INFO: (18) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 3.830154ms)
Aug  5 14:31:39.329: INFO: (18) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 3.803702ms)
Aug  5 14:31:39.329: INFO: (18) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 3.882967ms)
Aug  5 14:31:39.329: INFO: (18) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: ... (200; 3.884601ms)
Aug  5 14:31:39.329: INFO: (18) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 4.353372ms)
Aug  5 14:31:39.329: INFO: (18) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:1080/proxy/: test<... (200; 4.412003ms)
Aug  5 14:31:39.330: INFO: (18) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 4.577671ms)
Aug  5 14:31:39.331: INFO: (18) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 6.286166ms)
Aug  5 14:31:39.332: INFO: (18) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 6.593018ms)
Aug  5 14:31:39.332: INFO: (18) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 6.768751ms)
Aug  5 14:31:39.332: INFO: (18) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 6.771739ms)
Aug  5 14:31:39.333: INFO: (18) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 7.524431ms)
Aug  5 14:31:39.335: INFO: (19) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:1080/proxy/: ... (200; 2.783987ms)
Aug  5 14:31:39.336: INFO: (19) /api/v1/namespaces/proxy-4712/pods/http:proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 2.922059ms)
Aug  5 14:31:39.336: INFO: (19) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/: foo (200; 3.053173ms)
Aug  5 14:31:39.336: INFO: (19) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:443/proxy/: test<... (200; 3.759288ms)
Aug  5 14:31:39.337: INFO: (19) /api/v1/namespaces/proxy-4712/pods/https:proxy-service-rzlrr-2sslc:462/proxy/: tls qux (200; 3.92252ms)
Aug  5 14:31:39.337: INFO: (19) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/: bar (200; 3.966596ms)
Aug  5 14:31:39.337: INFO: (19) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:162/proxy/: bar (200; 4.127542ms)
Aug  5 14:31:39.337: INFO: (19) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname2/proxy/: tls qux (200; 4.269866ms)
Aug  5 14:31:39.337: INFO: (19) /api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc/proxy/: test (200; 4.325108ms)
Aug  5 14:31:39.337: INFO: (19) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname2/proxy/: bar (200; 4.296307ms)
Aug  5 14:31:39.337: INFO: (19) /api/v1/namespaces/proxy-4712/services/https:proxy-service-rzlrr:tlsportname1/proxy/: tls baz (200; 4.374254ms)
Aug  5 14:31:39.337: INFO: (19) /api/v1/namespaces/proxy-4712/services/http:proxy-service-rzlrr:portname1/proxy/: foo (200; 4.323245ms)
Aug  5 14:31:39.337: INFO: (19) /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/: foo (200; 4.329299ms)
STEP: deleting ReplicationController proxy-service-rzlrr in namespace proxy-4712, will wait for the garbage collector to delete the pods
Aug  5 14:31:39.397: INFO: Deleting ReplicationController proxy-service-rzlrr took: 8.66167ms
Aug  5 14:31:39.698: INFO: Terminating ReplicationController proxy-service-rzlrr pods took: 300.243419ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:31:46.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4712" for this suite.
Aug  5 14:31:52.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:31:52.520: INFO: namespace proxy-4712 deletion completed in 6.113115182s

• [SLOW TEST:17.478 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
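Annotation: each attempt above is a GET against an apiserver proxy subresource. The same endpoints can be hit by hand; the namespace and pod names below mirror this (ephemeral) run and would differ in a fresh one:

# Through a local apiserver proxy:
kubectl proxy --port=8001 &
curl -s http://127.0.0.1:8001/api/v1/namespaces/proxy-4712/pods/proxy-service-rzlrr-2sslc:160/proxy/      # -> foo
curl -s http://127.0.0.1:8001/api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname2/proxy/  # -> bar
# Or without a local proxy:
kubectl get --raw /api/v1/namespaces/proxy-4712/services/proxy-service-rzlrr:portname1/proxy/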
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:31:52.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-e6ca6f54-6a12-4265-83c4-9faf6c8c528a
STEP: Creating a pod to test consume secrets
Aug  5 14:31:52.621: INFO: Waiting up to 5m0s for pod "pod-secrets-14984758-127e-4e3e-8faa-dfaf678ba23c" in namespace "secrets-2015" to be "success or failure"
Aug  5 14:31:52.630: INFO: Pod "pod-secrets-14984758-127e-4e3e-8faa-dfaf678ba23c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.069656ms
Aug  5 14:31:54.634: INFO: Pod "pod-secrets-14984758-127e-4e3e-8faa-dfaf678ba23c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012771648s
Aug  5 14:31:56.638: INFO: Pod "pod-secrets-14984758-127e-4e3e-8faa-dfaf678ba23c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017220584s
STEP: Saw pod success
Aug  5 14:31:56.638: INFO: Pod "pod-secrets-14984758-127e-4e3e-8faa-dfaf678ba23c" satisfied condition "success or failure"
Aug  5 14:31:56.641: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-14984758-127e-4e3e-8faa-dfaf678ba23c container secret-env-test: 
STEP: delete the pod
Aug  5 14:31:56.669: INFO: Waiting for pod pod-secrets-14984758-127e-4e3e-8faa-dfaf678ba23c to disappear
Aug  5 14:31:56.684: INFO: Pod pod-secrets-14984758-127e-4e3e-8faa-dfaf678ba23c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:31:56.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2015" for this suite.
Aug  5 14:32:02.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:32:02.776: INFO: namespace secrets-2015 deletion completed in 6.088067249s

• [SLOW TEST:10.255 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
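Annotation: the env-var consumption above corresponds to a secretKeyRef. A minimal sketch with illustrative names:

kubectl create secret generic env-secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29
    command: ["sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: env-secret-demo
          key: data-1
EOF
kubectl logs pod-secrets-demo      # expect: SECRET_DATA=value-1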
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:32:02.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug  5 14:32:02.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9934'
Aug  5 14:32:02.956: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug  5 14:32:02.956: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Aug  5 14:32:03.013: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Aug  5 14:32:03.033: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Aug  5 14:32:03.041: INFO: scanned /root for discovery docs: 
Aug  5 14:32:03.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9934'
Aug  5 14:32:18.935: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug  5 14:32:18.935: INFO: stdout: "Created e2e-test-nginx-rc-3b98896a00463ee00c63bd8b33b5cb82\nScaling up e2e-test-nginx-rc-3b98896a00463ee00c63bd8b33b5cb82 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-3b98896a00463ee00c63bd8b33b5cb82 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-3b98896a00463ee00c63bd8b33b5cb82 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Aug  5 14:32:18.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-9934'
Aug  5 14:32:19.029: INFO: stderr: ""
Aug  5 14:32:19.029: INFO: stdout: "e2e-test-nginx-rc-3b98896a00463ee00c63bd8b33b5cb82-7pcnv "
Aug  5 14:32:19.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-3b98896a00463ee00c63bd8b33b5cb82-7pcnv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9934'
Aug  5 14:32:19.124: INFO: stderr: ""
Aug  5 14:32:19.124: INFO: stdout: "true"
Aug  5 14:32:19.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-3b98896a00463ee00c63bd8b33b5cb82-7pcnv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9934'
Aug  5 14:32:19.213: INFO: stderr: ""
Aug  5 14:32:19.213: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Aug  5 14:32:19.213: INFO: e2e-test-nginx-rc-3b98896a00463ee00c63bd8b33b5cb82-7pcnv is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Aug  5 14:32:19.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9934'
Aug  5 14:32:19.312: INFO: stderr: ""
Aug  5 14:32:19.312: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:32:19.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9934" for this suite.
Aug  5 14:32:25.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:32:25.431: INFO: namespace kubectl-9934 deletion completed in 6.11467403s

• [SLOW TEST:22.655 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
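Annotation: the two kubectl invocations driving this spec appear verbatim in the log and are runnable against kubectl v1.15; both the run/v1 generator and rolling-update are deprecated there and removed in later releases:

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
# Modern equivalent: manage the workload as a Deployment, then
#   kubectl set image deployment/<name> <container>=<image>
#   kubectl rollout status deployment/<name>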
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:32:25.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 14:32:25.566: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: 
alternatives.log
containers/
[the same two-entry listing was returned for each subsequent proxy request; the remainder of this test's output is truncated in the captured log]
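The node-log listing the proxy test requests above is also reachable directly through the apiserver's node proxy subresource. A minimal sketch, assuming the same kubeconfig and the iruya-worker node from this run:

    kubectl --kubeconfig=/root/.kube/config get --raw /api/v1/nodes/iruya-worker/proxy/logs/

The response is the node's log directory listing, which appears to be where the alternatives.log and containers/ entries above come from.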
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 14:32:31.816: INFO: Create a RollingUpdate DaemonSet
Aug  5 14:32:31.819: INFO: Check that daemon pods launch on every node of the cluster
Aug  5 14:32:31.824: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 14:32:31.862: INFO: Number of nodes with available pods: 0
Aug  5 14:32:31.862: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:32:32.867: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 14:32:32.870: INFO: Number of nodes with available pods: 0
Aug  5 14:32:32.870: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:32:33.866: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 14:32:33.870: INFO: Number of nodes with available pods: 0
Aug  5 14:32:33.870: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:32:34.867: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 14:32:34.870: INFO: Number of nodes with available pods: 0
Aug  5 14:32:34.870: INFO: Node iruya-worker is running more than one daemon pod
Aug  5 14:32:35.868: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 14:32:35.871: INFO: Number of nodes with available pods: 1
Aug  5 14:32:35.871: INFO: Node iruya-worker2 is running more than one daemon pod
Aug  5 14:32:36.867: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 14:32:36.870: INFO: Number of nodes with available pods: 2
Aug  5 14:32:36.870: INFO: Number of running nodes: 2, number of available pods: 2
Aug  5 14:32:36.870: INFO: Update the DaemonSet to trigger a rollout
Aug  5 14:32:36.877: INFO: Updating DaemonSet daemon-set
Aug  5 14:32:39.924: INFO: Roll back the DaemonSet before rollout is complete
Aug  5 14:32:39.930: INFO: Updating DaemonSet daemon-set
Aug  5 14:32:39.930: INFO: Make sure DaemonSet rollback is complete
Aug  5 14:32:39.938: INFO: Wrong image for pod: daemon-set-wnbbn. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug  5 14:32:39.938: INFO: Pod daemon-set-wnbbn is not available
Aug  5 14:32:39.945: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 14:32:40.950: INFO: Wrong image for pod: daemon-set-wnbbn. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug  5 14:32:40.950: INFO: Pod daemon-set-wnbbn is not available
Aug  5 14:32:40.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 14:32:41.950: INFO: Wrong image for pod: daemon-set-wnbbn. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug  5 14:32:41.950: INFO: Pod daemon-set-wnbbn is not available
Aug  5 14:32:41.956: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 14:32:42.950: INFO: Wrong image for pod: daemon-set-wnbbn. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug  5 14:32:42.950: INFO: Pod daemon-set-wnbbn is not available
Aug  5 14:32:42.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 14:32:43.950: INFO: Wrong image for pod: daemon-set-wnbbn. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug  5 14:32:43.950: INFO: Pod daemon-set-wnbbn is not available
Aug  5 14:32:43.955: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 14:32:44.950: INFO: Wrong image for pod: daemon-set-wnbbn. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug  5 14:32:44.950: INFO: Pod daemon-set-wnbbn is not available
Aug  5 14:32:44.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug  5 14:32:45.950: INFO: Pod daemon-set-ct2mw is not available
Aug  5 14:32:45.954: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8846, will wait for the garbage collector to delete the pods
Aug  5 14:32:46.018: INFO: Deleting DaemonSet.extensions daemon-set took: 6.916173ms
Aug  5 14:32:46.319: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.216646ms
Aug  5 14:32:56.422: INFO: Number of nodes with available pods: 0
Aug  5 14:32:56.422: INFO: Number of running nodes: 0, number of available pods: 0
Aug  5 14:32:56.424: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8846/daemonsets","resourceVersion":"3109747"},"items":null}

Aug  5 14:32:56.426: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8846/pods","resourceVersion":"3109747"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:32:56.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8846" for this suite.
Aug  5 14:33:02.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:33:02.560: INFO: namespace daemonsets-8846 deletion completed in 6.120732808s

• [SLOW TEST:30.836 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
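The sequence the rollback test above drives through the API maps onto ordinary kubectl rollout commands. A minimal sketch, assuming a DaemonSet daemon-set whose container is named app, in namespace NS (container name and namespace are placeholders):

    # Trigger a rollout with an image that cannot be pulled, as the test does.
    kubectl -n NS set image daemonset/daemon-set app=foo:non-existent
    # Roll back before the rollout completes.
    kubectl -n NS rollout undo daemonset/daemon-set
    # Confirm the revived pods are back on the original image.
    kubectl -n NS rollout status daemonset/daemon-set

Because only the pod that already moved onto foo:non-existent gets replaced (daemon-set-wnbbn above), the healthy pods keep running, which is the "without unnecessary restarts" property being asserted.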
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:33:02.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug  5 14:33:02.622: INFO: Waiting up to 5m0s for pod "pod-82acc108-791a-4203-a66d-278281dff1f2" in namespace "emptydir-5586" to be "success or failure"
Aug  5 14:33:02.627: INFO: Pod "pod-82acc108-791a-4203-a66d-278281dff1f2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128209ms
Aug  5 14:33:04.631: INFO: Pod "pod-82acc108-791a-4203-a66d-278281dff1f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008733212s
Aug  5 14:33:06.713: INFO: Pod "pod-82acc108-791a-4203-a66d-278281dff1f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.090481045s
STEP: Saw pod success
Aug  5 14:33:06.713: INFO: Pod "pod-82acc108-791a-4203-a66d-278281dff1f2" satisfied condition "success or failure"
Aug  5 14:33:06.716: INFO: Trying to get logs from node iruya-worker2 pod pod-82acc108-791a-4203-a66d-278281dff1f2 container test-container: 
STEP: delete the pod
Aug  5 14:33:06.735: INFO: Waiting for pod pod-82acc108-791a-4203-a66d-278281dff1f2 to disappear
Aug  5 14:33:06.740: INFO: Pod pod-82acc108-791a-4203-a66d-278281dff1f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:33:06.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5586" for this suite.
Aug  5 14:33:12.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:33:12.851: INFO: namespace emptydir-5586 deletion completed in 6.107509074s

• [SLOW TEST:10.290 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
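The emptyDir case above boils down to a pod that stats a default-medium emptyDir mount and expects mode 0777. A minimal sketch of an equivalent pod, assuming the stock busybox image rather than the suite's own test image:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        # Print the mode of the mount point; 777 is expected on the default medium.
        command: ["sh", "-c", "stat -c '%a' /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}
    EOF

kubectl logs emptydir-demo should then show 777, mirroring the "success or failure" check in the log.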
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:33:12.852: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Aug  5 14:33:12.911: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6264" to be "success or failure"
Aug  5 14:33:12.914: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.550719ms
Aug  5 14:33:14.918: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007594647s
Aug  5 14:33:16.928: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017869014s
Aug  5 14:33:18.933: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022472796s
STEP: Saw pod success
Aug  5 14:33:18.933: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug  5 14:33:18.936: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug  5 14:33:18.992: INFO: Waiting for pod pod-host-path-test to disappear
Aug  5 14:33:19.049: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:33:19.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6264" for this suite.
Aug  5 14:33:25.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:33:25.139: INFO: namespace hostpath-6264 deletion completed in 6.085951911s

• [SLOW TEST:12.287 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
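The hostPath variant above is the same mode probe against a directory on the node. A minimal sketch, assuming a throwaway /tmp/hostpath-demo path (the real test manages its own directory):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostpath-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-1
        image: busybox
        command: ["sh", "-c", "stat -c '%a' /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        hostPath:
          path: /tmp/hostpath-demo
          type: DirectoryOrCreate
    EOF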
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:33:25.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-07ad7067-bf04-4a53-8864-4ce395cbcb81
STEP: Creating a pod to test consume secrets
Aug  5 14:33:25.263: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ce797c96-081d-444b-a59e-91c23a8a7bbf" in namespace "projected-3650" to be "success or failure"
Aug  5 14:33:25.285: INFO: Pod "pod-projected-secrets-ce797c96-081d-444b-a59e-91c23a8a7bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.231494ms
Aug  5 14:33:27.324: INFO: Pod "pod-projected-secrets-ce797c96-081d-444b-a59e-91c23a8a7bbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061387953s
Aug  5 14:33:29.329: INFO: Pod "pod-projected-secrets-ce797c96-081d-444b-a59e-91c23a8a7bbf": Phase="Running", Reason="", readiness=true. Elapsed: 4.066285105s
Aug  5 14:33:31.333: INFO: Pod "pod-projected-secrets-ce797c96-081d-444b-a59e-91c23a8a7bbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.07007235s
STEP: Saw pod success
Aug  5 14:33:31.333: INFO: Pod "pod-projected-secrets-ce797c96-081d-444b-a59e-91c23a8a7bbf" satisfied condition "success or failure"
Aug  5 14:33:31.336: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-ce797c96-081d-444b-a59e-91c23a8a7bbf container projected-secret-volume-test: 
STEP: delete the pod
Aug  5 14:33:31.373: INFO: Waiting for pod pod-projected-secrets-ce797c96-081d-444b-a59e-91c23a8a7bbf to disappear
Aug  5 14:33:31.387: INFO: Pod pod-projected-secrets-ce797c96-081d-444b-a59e-91c23a8a7bbf no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:33:31.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3650" for this suite.
Aug  5 14:33:37.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:33:37.533: INFO: namespace projected-3650 deletion completed in 6.12413064s

• [SLOW TEST:12.394 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
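The projected-secret case above combines three knobs: a non-root user, a defaultMode on the projected volume, and an fsGroup so the group permission bit still grants access. A minimal sketch, assuming a demo-secret created beforehand and illustrative uid/gid/mode values:

    kubectl create secret generic demo-secret --from-literal=key=value
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-secret-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000   # non-root, illustrative
        fsGroup: 2000     # group ownership applied to the volume, illustrative
      containers:
      - name: projected-secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -ln /etc/projected && cat /etc/projected/key"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/projected
      volumes:
      - name: secret-volume
        projected:
          defaultMode: 0440   # readable by owner and fsGroup only
          sources:
          - secret:
              name: demo-secret
    EOF

With fsGroup set, the mounted files are group-owned by 2000, so the 0440 mode is enough for the non-root uid 1000 to read them.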
SSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:33:37.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-480
I0805 14:33:37.613566       6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-480, replica count: 1
I0805 14:33:38.664048       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0805 14:33:39.664309       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0805 14:33:40.664506       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0805 14:33:41.664823       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug  5 14:33:41.821: INFO: Created: latency-svc-g5gzb
Aug  5 14:33:41.831: INFO: Got endpoints: latency-svc-g5gzb [66.424782ms]
Aug  5 14:33:41.869: INFO: Created: latency-svc-g754j
Aug  5 14:33:41.885: INFO: Got endpoints: latency-svc-g754j [54.100799ms]
Aug  5 14:33:41.918: INFO: Created: latency-svc-rlkqp
Aug  5 14:33:41.989: INFO: Got endpoints: latency-svc-rlkqp [157.435396ms]
Aug  5 14:33:41.995: INFO: Created: latency-svc-srbzh
Aug  5 14:33:42.006: INFO: Got endpoints: latency-svc-srbzh [174.56756ms]
Aug  5 14:33:42.055: INFO: Created: latency-svc-wm7gc
Aug  5 14:33:42.073: INFO: Got endpoints: latency-svc-wm7gc [241.044171ms]
Aug  5 14:33:42.144: INFO: Created: latency-svc-xhxjm
Aug  5 14:33:42.147: INFO: Got endpoints: latency-svc-xhxjm [315.862091ms]
Aug  5 14:33:42.187: INFO: Created: latency-svc-tdn7f
Aug  5 14:33:42.198: INFO: Got endpoints: latency-svc-tdn7f [366.881781ms]
Aug  5 14:33:42.218: INFO: Created: latency-svc-g8hxp
Aug  5 14:33:42.228: INFO: Got endpoints: latency-svc-g8hxp [397.083845ms]
Aug  5 14:33:42.290: INFO: Created: latency-svc-q75hp
Aug  5 14:33:42.314: INFO: Got endpoints: latency-svc-q75hp [482.370863ms]
Aug  5 14:33:42.314: INFO: Created: latency-svc-nsqq9
Aug  5 14:33:42.325: INFO: Got endpoints: latency-svc-nsqq9 [493.772767ms]
Aug  5 14:33:42.343: INFO: Created: latency-svc-j45kf
Aug  5 14:33:42.355: INFO: Got endpoints: latency-svc-j45kf [523.628391ms]
Aug  5 14:33:42.373: INFO: Created: latency-svc-sstj2
Aug  5 14:33:42.386: INFO: Got endpoints: latency-svc-sstj2 [554.590432ms]
Aug  5 14:33:42.438: INFO: Created: latency-svc-92gnc
Aug  5 14:33:42.476: INFO: Got endpoints: latency-svc-92gnc [644.540365ms]
Aug  5 14:33:42.478: INFO: Created: latency-svc-d4t7g
Aug  5 14:33:42.488: INFO: Got endpoints: latency-svc-d4t7g [656.968667ms]
Aug  5 14:33:42.534: INFO: Created: latency-svc-5qmb8
Aug  5 14:33:42.576: INFO: Got endpoints: latency-svc-5qmb8 [744.982963ms]
Aug  5 14:33:42.602: INFO: Created: latency-svc-xdt9x
Aug  5 14:33:42.614: INFO: Got endpoints: latency-svc-xdt9x [782.421901ms]
Aug  5 14:33:42.655: INFO: Created: latency-svc-x9vbt
Aug  5 14:33:42.669: INFO: Got endpoints: latency-svc-x9vbt [783.176996ms]
Aug  5 14:33:42.733: INFO: Created: latency-svc-689nc
Aug  5 14:33:42.751: INFO: Got endpoints: latency-svc-689nc [762.467719ms]
Aug  5 14:33:42.776: INFO: Created: latency-svc-bmjgc
Aug  5 14:33:42.788: INFO: Got endpoints: latency-svc-bmjgc [782.227168ms]
Aug  5 14:33:42.814: INFO: Created: latency-svc-2vm2z
Aug  5 14:33:42.899: INFO: Got endpoints: latency-svc-2vm2z [826.322893ms]
Aug  5 14:33:42.903: INFO: Created: latency-svc-fwrhx
Aug  5 14:33:42.914: INFO: Got endpoints: latency-svc-fwrhx [766.858036ms]
Aug  5 14:33:42.937: INFO: Created: latency-svc-c474j
Aug  5 14:33:42.958: INFO: Got endpoints: latency-svc-c474j [759.668286ms]
Aug  5 14:33:42.991: INFO: Created: latency-svc-vlmdn
Aug  5 14:33:43.031: INFO: Got endpoints: latency-svc-vlmdn [802.427736ms]
Aug  5 14:33:43.052: INFO: Created: latency-svc-b6kcd
Aug  5 14:33:43.081: INFO: Got endpoints: latency-svc-b6kcd [767.63305ms]
Aug  5 14:33:43.124: INFO: Created: latency-svc-6bjcw
Aug  5 14:33:43.198: INFO: Got endpoints: latency-svc-6bjcw [873.194284ms]
Aug  5 14:33:43.205: INFO: Created: latency-svc-r4rks
Aug  5 14:33:43.210: INFO: Got endpoints: latency-svc-r4rks [855.309363ms]
Aug  5 14:33:43.236: INFO: Created: latency-svc-4r7js
Aug  5 14:33:43.247: INFO: Got endpoints: latency-svc-4r7js [860.942826ms]
Aug  5 14:33:43.266: INFO: Created: latency-svc-vbhsx
Aug  5 14:33:43.277: INFO: Got endpoints: latency-svc-vbhsx [801.263998ms]
Aug  5 14:33:43.396: INFO: Created: latency-svc-nzrlk
Aug  5 14:33:43.401: INFO: Got endpoints: latency-svc-nzrlk [912.245618ms]
Aug  5 14:33:43.435: INFO: Created: latency-svc-z4lgv
Aug  5 14:33:43.472: INFO: Got endpoints: latency-svc-z4lgv [895.547043ms]
Aug  5 14:33:43.488: INFO: Created: latency-svc-bxp55
Aug  5 14:33:43.545: INFO: Got endpoints: latency-svc-bxp55 [931.3256ms]
Aug  5 14:33:43.552: INFO: Created: latency-svc-mz4hg
Aug  5 14:33:43.560: INFO: Got endpoints: latency-svc-mz4hg [891.035675ms]
Aug  5 14:33:43.578: INFO: Created: latency-svc-jrg57
Aug  5 14:33:43.603: INFO: Got endpoints: latency-svc-jrg57 [851.394351ms]
Aug  5 14:33:43.634: INFO: Created: latency-svc-4bknf
Aug  5 14:33:43.645: INFO: Got endpoints: latency-svc-4bknf [856.375605ms]
Aug  5 14:33:43.696: INFO: Created: latency-svc-2bkgz
Aug  5 14:33:43.700: INFO: Got endpoints: latency-svc-2bkgz [800.743277ms]
Aug  5 14:33:43.733: INFO: Created: latency-svc-gqghh
Aug  5 14:33:43.742: INFO: Got endpoints: latency-svc-gqghh [827.507412ms]
Aug  5 14:33:43.773: INFO: Created: latency-svc-z7kzx
Aug  5 14:33:43.851: INFO: Got endpoints: latency-svc-z7kzx [892.871503ms]
Aug  5 14:33:43.866: INFO: Created: latency-svc-jdbpm
Aug  5 14:33:43.887: INFO: Got endpoints: latency-svc-jdbpm [856.282999ms]
Aug  5 14:33:43.915: INFO: Created: latency-svc-xdcvr
Aug  5 14:33:43.929: INFO: Got endpoints: latency-svc-xdcvr [847.400061ms]
Aug  5 14:33:44.019: INFO: Created: latency-svc-n857t
Aug  5 14:33:44.023: INFO: Got endpoints: latency-svc-n857t [824.18521ms]
Aug  5 14:33:44.051: INFO: Created: latency-svc-4mhkr
Aug  5 14:33:44.067: INFO: Got endpoints: latency-svc-4mhkr [857.000156ms]
Aug  5 14:33:44.098: INFO: Created: latency-svc-ttd8g
Aug  5 14:33:44.192: INFO: Got endpoints: latency-svc-ttd8g [945.095257ms]
Aug  5 14:33:44.197: INFO: Created: latency-svc-bxjfb
Aug  5 14:33:44.206: INFO: Got endpoints: latency-svc-bxjfb [928.40074ms]
Aug  5 14:33:44.239: INFO: Created: latency-svc-gggft
Aug  5 14:33:44.254: INFO: Got endpoints: latency-svc-gggft [853.414824ms]
Aug  5 14:33:44.275: INFO: Created: latency-svc-wjr9h
Aug  5 14:33:44.284: INFO: Got endpoints: latency-svc-wjr9h [812.068272ms]
Aug  5 14:33:44.343: INFO: Created: latency-svc-xzd4j
Aug  5 14:33:44.345: INFO: Got endpoints: latency-svc-xzd4j [799.857131ms]
Aug  5 14:33:44.376: INFO: Created: latency-svc-whn6p
Aug  5 14:33:44.387: INFO: Got endpoints: latency-svc-whn6p [827.332596ms]
Aug  5 14:33:44.418: INFO: Created: latency-svc-kjnzx
Aug  5 14:33:44.430: INFO: Got endpoints: latency-svc-kjnzx [827.083261ms]
Aug  5 14:33:44.501: INFO: Created: latency-svc-q7c44
Aug  5 14:33:44.503: INFO: Got endpoints: latency-svc-q7c44 [857.842214ms]
Aug  5 14:33:44.581: INFO: Created: latency-svc-tjqqf
Aug  5 14:33:44.592: INFO: Got endpoints: latency-svc-tjqqf [892.414054ms]
Aug  5 14:33:44.630: INFO: Created: latency-svc-zcdnk
Aug  5 14:33:44.652: INFO: Got endpoints: latency-svc-zcdnk [909.87665ms]
Aug  5 14:33:44.683: INFO: Created: latency-svc-m74c8
Aug  5 14:33:44.707: INFO: Got endpoints: latency-svc-m74c8 [856.340293ms]
Aug  5 14:33:44.798: INFO: Created: latency-svc-d7qzd
Aug  5 14:33:44.802: INFO: Got endpoints: latency-svc-d7qzd [914.400183ms]
Aug  5 14:33:44.831: INFO: Created: latency-svc-h4wzw
Aug  5 14:33:44.845: INFO: Got endpoints: latency-svc-h4wzw [915.925795ms]
Aug  5 14:33:44.863: INFO: Created: latency-svc-r8rv5
Aug  5 14:33:44.875: INFO: Got endpoints: latency-svc-r8rv5 [852.590816ms]
Aug  5 14:33:44.953: INFO: Created: latency-svc-s8dzk
Aug  5 14:33:44.955: INFO: Got endpoints: latency-svc-s8dzk [887.520911ms]
Aug  5 14:33:44.988: INFO: Created: latency-svc-z5fjq
Aug  5 14:33:45.002: INFO: Got endpoints: latency-svc-z5fjq [809.462899ms]
Aug  5 14:33:45.018: INFO: Created: latency-svc-p4sbz
Aug  5 14:33:45.032: INFO: Got endpoints: latency-svc-p4sbz [825.976375ms]
Aug  5 14:33:45.115: INFO: Created: latency-svc-tmp89
Aug  5 14:33:45.123: INFO: Got endpoints: latency-svc-tmp89 [868.293623ms]
Aug  5 14:33:45.150: INFO: Created: latency-svc-hdk7g
Aug  5 14:33:45.164: INFO: Got endpoints: latency-svc-hdk7g [880.320409ms]
Aug  5 14:33:45.185: INFO: Created: latency-svc-44x42
Aug  5 14:33:45.201: INFO: Got endpoints: latency-svc-44x42 [855.282206ms]
Aug  5 14:33:45.253: INFO: Created: latency-svc-r7wwd
Aug  5 14:33:45.282: INFO: Got endpoints: latency-svc-r7wwd [894.882527ms]
Aug  5 14:33:45.319: INFO: Created: latency-svc-xfhj7
Aug  5 14:33:45.426: INFO: Got endpoints: latency-svc-xfhj7 [995.834662ms]
Aug  5 14:33:45.428: INFO: Created: latency-svc-gs922
Aug  5 14:33:45.442: INFO: Got endpoints: latency-svc-gs922 [939.329369ms]
Aug  5 14:33:45.491: INFO: Created: latency-svc-8rvms
Aug  5 14:33:45.508: INFO: Got endpoints: latency-svc-8rvms [915.531981ms]
Aug  5 14:33:45.582: INFO: Created: latency-svc-xdrdw
Aug  5 14:33:45.586: INFO: Got endpoints: latency-svc-xdrdw [934.312692ms]
Aug  5 14:33:45.618: INFO: Created: latency-svc-hbs7t
Aug  5 14:33:45.636: INFO: Got endpoints: latency-svc-hbs7t [928.768699ms]
Aug  5 14:33:45.677: INFO: Created: latency-svc-c9jn9
Aug  5 14:33:45.713: INFO: Got endpoints: latency-svc-c9jn9 [911.123968ms]
Aug  5 14:33:45.719: INFO: Created: latency-svc-fxp45
Aug  5 14:33:45.731: INFO: Got endpoints: latency-svc-fxp45 [885.851774ms]
Aug  5 14:33:45.755: INFO: Created: latency-svc-bz4kf
Aug  5 14:33:45.767: INFO: Got endpoints: latency-svc-bz4kf [891.839696ms]
Aug  5 14:33:45.787: INFO: Created: latency-svc-zjhbg
Aug  5 14:33:45.797: INFO: Got endpoints: latency-svc-zjhbg [842.087854ms]
Aug  5 14:33:45.863: INFO: Created: latency-svc-7npv5
Aug  5 14:33:45.866: INFO: Got endpoints: latency-svc-7npv5 [864.474542ms]
Aug  5 14:33:45.942: INFO: Created: latency-svc-qb55g
Aug  5 14:33:45.995: INFO: Got endpoints: latency-svc-qb55g [962.919516ms]
Aug  5 14:33:46.032: INFO: Created: latency-svc-rpdk5
Aug  5 14:33:46.050: INFO: Got endpoints: latency-svc-rpdk5 [927.025829ms]
Aug  5 14:33:46.087: INFO: Created: latency-svc-8nvzh
Aug  5 14:33:46.169: INFO: Got endpoints: latency-svc-8nvzh [1.0042581s]
Aug  5 14:33:46.171: INFO: Created: latency-svc-kslm4
Aug  5 14:33:46.177: INFO: Got endpoints: latency-svc-kslm4 [976.083678ms]
Aug  5 14:33:46.200: INFO: Created: latency-svc-vrnd6
Aug  5 14:33:46.213: INFO: Got endpoints: latency-svc-vrnd6 [930.759373ms]
Aug  5 14:33:46.259: INFO: Created: latency-svc-8pbvj
Aug  5 14:33:46.306: INFO: Got endpoints: latency-svc-8pbvj [880.480529ms]
Aug  5 14:33:46.319: INFO: Created: latency-svc-lvnj6
Aug  5 14:33:46.334: INFO: Got endpoints: latency-svc-lvnj6 [891.431087ms]
Aug  5 14:33:46.356: INFO: Created: latency-svc-rw55z
Aug  5 14:33:46.370: INFO: Got endpoints: latency-svc-rw55z [861.750814ms]
Aug  5 14:33:46.392: INFO: Created: latency-svc-lpq7n
Aug  5 14:33:46.400: INFO: Got endpoints: latency-svc-lpq7n [813.551122ms]
Aug  5 14:33:46.438: INFO: Created: latency-svc-hcb7t
Aug  5 14:33:46.441: INFO: Got endpoints: latency-svc-hcb7t [805.146273ms]
Aug  5 14:33:46.468: INFO: Created: latency-svc-9b96l
Aug  5 14:33:46.485: INFO: Got endpoints: latency-svc-9b96l [772.201911ms]
Aug  5 14:33:46.511: INFO: Created: latency-svc-m742c
Aug  5 14:33:46.612: INFO: Got endpoints: latency-svc-m742c [880.747178ms]
Aug  5 14:33:46.615: INFO: Created: latency-svc-6c564
Aug  5 14:33:46.623: INFO: Got endpoints: latency-svc-6c564 [855.996358ms]
Aug  5 14:33:46.644: INFO: Created: latency-svc-fdt7b
Aug  5 14:33:46.660: INFO: Got endpoints: latency-svc-fdt7b [862.206939ms]
Aug  5 14:33:46.699: INFO: Created: latency-svc-d9jmz
Aug  5 14:33:46.708: INFO: Got endpoints: latency-svc-d9jmz [841.398499ms]
Aug  5 14:33:46.762: INFO: Created: latency-svc-fhcc9
Aug  5 14:33:46.787: INFO: Got endpoints: latency-svc-fhcc9 [791.743834ms]
Aug  5 14:33:46.787: INFO: Created: latency-svc-7gj8t
Aug  5 14:33:46.798: INFO: Got endpoints: latency-svc-7gj8t [748.710028ms]
Aug  5 14:33:46.828: INFO: Created: latency-svc-mtjsk
Aug  5 14:33:46.935: INFO: Got endpoints: latency-svc-mtjsk [765.657315ms]
Aug  5 14:33:46.945: INFO: Created: latency-svc-7ht7n
Aug  5 14:33:46.967: INFO: Got endpoints: latency-svc-7ht7n [790.70563ms]
Aug  5 14:33:47.009: INFO: Created: latency-svc-t2789
Aug  5 14:33:47.021: INFO: Got endpoints: latency-svc-t2789 [808.208218ms]
Aug  5 14:33:47.091: INFO: Created: latency-svc-n4cm4
Aug  5 14:33:47.094: INFO: Got endpoints: latency-svc-n4cm4 [787.295639ms]
Aug  5 14:33:47.122: INFO: Created: latency-svc-v6q4m
Aug  5 14:33:47.136: INFO: Got endpoints: latency-svc-v6q4m [801.932491ms]
Aug  5 14:33:47.159: INFO: Created: latency-svc-xf6mc
Aug  5 14:33:47.172: INFO: Got endpoints: latency-svc-xf6mc [802.164093ms]
Aug  5 14:33:47.190: INFO: Created: latency-svc-qdwx2
Aug  5 14:33:47.246: INFO: Got endpoints: latency-svc-qdwx2 [846.089605ms]
Aug  5 14:33:47.262: INFO: Created: latency-svc-h54l5
Aug  5 14:33:47.275: INFO: Got endpoints: latency-svc-h54l5 [833.226968ms]
Aug  5 14:33:47.297: INFO: Created: latency-svc-ssth5
Aug  5 14:33:47.311: INFO: Got endpoints: latency-svc-ssth5 [826.017919ms]
Aug  5 14:33:47.332: INFO: Created: latency-svc-czhns
Aug  5 14:33:47.390: INFO: Got endpoints: latency-svc-czhns [778.524212ms]
Aug  5 14:33:47.392: INFO: Created: latency-svc-zpkm6
Aug  5 14:33:47.407: INFO: Got endpoints: latency-svc-zpkm6 [784.240859ms]
Aug  5 14:33:47.429: INFO: Created: latency-svc-l2p4b
Aug  5 14:33:47.444: INFO: Got endpoints: latency-svc-l2p4b [784.159084ms]
Aug  5 14:33:47.465: INFO: Created: latency-svc-b6gbv
Aug  5 14:33:47.474: INFO: Got endpoints: latency-svc-b6gbv [766.071108ms]
Aug  5 14:33:47.546: INFO: Created: latency-svc-s4lfb
Aug  5 14:33:47.549: INFO: Got endpoints: latency-svc-s4lfb [762.673621ms]
Aug  5 14:33:47.603: INFO: Created: latency-svc-mk5j9
Aug  5 14:33:47.632: INFO: Got endpoints: latency-svc-mk5j9 [833.149798ms]
Aug  5 14:33:47.690: INFO: Created: latency-svc-f558b
Aug  5 14:33:47.700: INFO: Got endpoints: latency-svc-f558b [765.914027ms]
Aug  5 14:33:47.722: INFO: Created: latency-svc-df2gc
Aug  5 14:33:47.730: INFO: Got endpoints: latency-svc-df2gc [762.829477ms]
Aug  5 14:33:47.753: INFO: Created: latency-svc-zfdx6
Aug  5 14:33:47.767: INFO: Got endpoints: latency-svc-zfdx6 [745.86806ms]
Aug  5 14:33:47.789: INFO: Created: latency-svc-5qx47
Aug  5 14:33:47.851: INFO: Got endpoints: latency-svc-5qx47 [757.106188ms]
Aug  5 14:33:47.873: INFO: Created: latency-svc-djvkj
Aug  5 14:33:47.888: INFO: Got endpoints: latency-svc-djvkj [751.831258ms]
Aug  5 14:33:47.907: INFO: Created: latency-svc-v4w9m
Aug  5 14:33:47.924: INFO: Got endpoints: latency-svc-v4w9m [752.160407ms]
Aug  5 14:33:47.944: INFO: Created: latency-svc-w22w6
Aug  5 14:33:48.006: INFO: Got endpoints: latency-svc-w22w6 [760.489029ms]
Aug  5 14:33:48.009: INFO: Created: latency-svc-b5k5f
Aug  5 14:33:48.041: INFO: Got endpoints: latency-svc-b5k5f [766.006008ms]
Aug  5 14:33:48.077: INFO: Created: latency-svc-5hrqw
Aug  5 14:33:48.093: INFO: Got endpoints: latency-svc-5hrqw [781.499675ms]
Aug  5 14:33:48.151: INFO: Created: latency-svc-m42dx
Aug  5 14:33:48.159: INFO: Got endpoints: latency-svc-m42dx [768.856017ms]
Aug  5 14:33:48.183: INFO: Created: latency-svc-t6kmz
Aug  5 14:33:48.196: INFO: Got endpoints: latency-svc-t6kmz [788.587992ms]
Aug  5 14:33:48.220: INFO: Created: latency-svc-4rpxr
Aug  5 14:33:48.245: INFO: Got endpoints: latency-svc-4rpxr [800.592068ms]
Aug  5 14:33:48.318: INFO: Created: latency-svc-mr876
Aug  5 14:33:48.328: INFO: Got endpoints: latency-svc-mr876 [853.857615ms]
Aug  5 14:33:48.347: INFO: Created: latency-svc-d4ncc
Aug  5 14:33:48.358: INFO: Got endpoints: latency-svc-d4ncc [808.363808ms]
Aug  5 14:33:48.375: INFO: Created: latency-svc-26827
Aug  5 14:33:48.389: INFO: Got endpoints: latency-svc-26827 [757.354694ms]
Aug  5 14:33:48.411: INFO: Created: latency-svc-wnhq8
Aug  5 14:33:48.450: INFO: Got endpoints: latency-svc-wnhq8 [749.171424ms]
Aug  5 14:33:48.453: INFO: Created: latency-svc-g65gp
Aug  5 14:33:48.467: INFO: Got endpoints: latency-svc-g65gp [736.378363ms]
Aug  5 14:33:48.490: INFO: Created: latency-svc-7n4q9
Aug  5 14:33:48.503: INFO: Got endpoints: latency-svc-7n4q9 [736.043498ms]
Aug  5 14:33:48.528: INFO: Created: latency-svc-j6s26
Aug  5 14:33:48.539: INFO: Got endpoints: latency-svc-j6s26 [688.497789ms]
Aug  5 14:33:48.606: INFO: Created: latency-svc-fzxc2
Aug  5 14:33:48.612: INFO: Got endpoints: latency-svc-fzxc2 [724.164605ms]
Aug  5 14:33:48.635: INFO: Created: latency-svc-9snqm
Aug  5 14:33:48.648: INFO: Got endpoints: latency-svc-9snqm [723.853164ms]
Aug  5 14:33:48.682: INFO: Created: latency-svc-n8npx
Aug  5 14:33:48.749: INFO: Got endpoints: latency-svc-n8npx [742.166611ms]
Aug  5 14:33:48.759: INFO: Created: latency-svc-rvzpj
Aug  5 14:33:48.775: INFO: Got endpoints: latency-svc-rvzpj [733.963734ms]
Aug  5 14:33:48.802: INFO: Created: latency-svc-ld8b9
Aug  5 14:33:48.826: INFO: Got endpoints: latency-svc-ld8b9 [733.200438ms]
Aug  5 14:33:48.901: INFO: Created: latency-svc-v24h4
Aug  5 14:33:48.907: INFO: Got endpoints: latency-svc-v24h4 [747.649281ms]
Aug  5 14:33:48.941: INFO: Created: latency-svc-nxp57
Aug  5 14:33:48.955: INFO: Got endpoints: latency-svc-nxp57 [759.153728ms]
Aug  5 14:33:48.975: INFO: Created: latency-svc-kpcm6
Aug  5 14:33:49.055: INFO: Got endpoints: latency-svc-kpcm6 [809.99426ms]
Aug  5 14:33:49.059: INFO: Created: latency-svc-blszz
Aug  5 14:33:49.088: INFO: Got endpoints: latency-svc-blszz [760.09829ms]
Aug  5 14:33:49.108: INFO: Created: latency-svc-m95xq
Aug  5 14:33:49.125: INFO: Got endpoints: latency-svc-m95xq [766.770477ms]
Aug  5 14:33:49.211: INFO: Created: latency-svc-mbnj9
Aug  5 14:33:49.214: INFO: Got endpoints: latency-svc-mbnj9 [824.939398ms]
Aug  5 14:33:49.246: INFO: Created: latency-svc-qj4d9
Aug  5 14:33:49.263: INFO: Got endpoints: latency-svc-qj4d9 [812.880771ms]
Aug  5 14:33:49.281: INFO: Created: latency-svc-j6lqh
Aug  5 14:33:49.293: INFO: Got endpoints: latency-svc-j6lqh [826.121505ms]
Aug  5 14:33:49.372: INFO: Created: latency-svc-rvmfk
Aug  5 14:33:49.402: INFO: Got endpoints: latency-svc-rvmfk [898.77631ms]
Aug  5 14:33:49.403: INFO: Created: latency-svc-nhwrk
Aug  5 14:33:49.419: INFO: Got endpoints: latency-svc-nhwrk [880.042783ms]
Aug  5 14:33:49.444: INFO: Created: latency-svc-lmthw
Aug  5 14:33:49.456: INFO: Got endpoints: latency-svc-lmthw [844.187478ms]
Aug  5 14:33:49.522: INFO: Created: latency-svc-xj8z8
Aug  5 14:33:49.534: INFO: Got endpoints: latency-svc-xj8z8 [885.723544ms]
Aug  5 14:33:49.566: INFO: Created: latency-svc-nvm2g
Aug  5 14:33:49.577: INFO: Got endpoints: latency-svc-nvm2g [828.488065ms]
Aug  5 14:33:49.592: INFO: Created: latency-svc-tv4t8
Aug  5 14:33:49.665: INFO: Got endpoints: latency-svc-tv4t8 [890.520906ms]
Aug  5 14:33:49.707: INFO: Created: latency-svc-qwnh2
Aug  5 14:33:49.727: INFO: Got endpoints: latency-svc-qwnh2 [900.715383ms]
Aug  5 14:33:49.750: INFO: Created: latency-svc-f6m2r
Aug  5 14:33:49.763: INFO: Got endpoints: latency-svc-f6m2r [856.280562ms]
Aug  5 14:33:49.817: INFO: Created: latency-svc-kn4dz
Aug  5 14:33:49.823: INFO: Got endpoints: latency-svc-kn4dz [867.997871ms]
Aug  5 14:33:49.846: INFO: Created: latency-svc-tcz87
Aug  5 14:33:49.878: INFO: Got endpoints: latency-svc-tcz87 [822.941673ms]
Aug  5 14:33:49.972: INFO: Created: latency-svc-m2hmq
Aug  5 14:33:49.980: INFO: Got endpoints: latency-svc-m2hmq [891.578516ms]
Aug  5 14:33:50.044: INFO: Created: latency-svc-7blg4
Aug  5 14:33:50.058: INFO: Got endpoints: latency-svc-7blg4 [933.800346ms]
Aug  5 14:33:50.132: INFO: Created: latency-svc-8kwk6
Aug  5 14:33:50.148: INFO: Got endpoints: latency-svc-8kwk6 [933.939492ms]
Aug  5 14:33:50.186: INFO: Created: latency-svc-j27r2
Aug  5 14:33:50.204: INFO: Got endpoints: latency-svc-j27r2 [941.507748ms]
Aug  5 14:33:50.229: INFO: Created: latency-svc-hdffk
Aug  5 14:33:50.294: INFO: Got endpoints: latency-svc-hdffk [1.000656102s]
Aug  5 14:33:50.296: INFO: Created: latency-svc-9ps49
Aug  5 14:33:50.305: INFO: Got endpoints: latency-svc-9ps49 [902.757766ms]
Aug  5 14:33:50.338: INFO: Created: latency-svc-vrj47
Aug  5 14:33:50.353: INFO: Got endpoints: latency-svc-vrj47 [933.752433ms]
Aug  5 14:33:50.374: INFO: Created: latency-svc-tnkh7
Aug  5 14:33:50.432: INFO: Got endpoints: latency-svc-tnkh7 [975.555949ms]
Aug  5 14:33:50.451: INFO: Created: latency-svc-xn2dw
Aug  5 14:33:50.462: INFO: Got endpoints: latency-svc-xn2dw [927.665713ms]
Aug  5 14:33:50.487: INFO: Created: latency-svc-bb5wc
Aug  5 14:33:50.498: INFO: Got endpoints: latency-svc-bb5wc [920.358654ms]
Aug  5 14:33:50.611: INFO: Created: latency-svc-9lxj4
Aug  5 14:33:50.615: INFO: Got endpoints: latency-svc-9lxj4 [949.560505ms]
Aug  5 14:33:50.655: INFO: Created: latency-svc-95w8j
Aug  5 14:33:50.690: INFO: Got endpoints: latency-svc-95w8j [963.205445ms]
Aug  5 14:33:50.767: INFO: Created: latency-svc-84fws
Aug  5 14:33:50.769: INFO: Got endpoints: latency-svc-84fws [1.006141192s]
Aug  5 14:33:50.804: INFO: Created: latency-svc-tpjs4
Aug  5 14:33:50.817: INFO: Got endpoints: latency-svc-tpjs4 [993.706481ms]
Aug  5 14:33:50.842: INFO: Created: latency-svc-xghlb
Aug  5 14:33:50.853: INFO: Got endpoints: latency-svc-xghlb [975.619898ms]
Aug  5 14:33:50.918: INFO: Created: latency-svc-ctsrn
Aug  5 14:33:50.925: INFO: Got endpoints: latency-svc-ctsrn [945.750239ms]
Aug  5 14:33:50.954: INFO: Created: latency-svc-7nfd2
Aug  5 14:33:50.980: INFO: Got endpoints: latency-svc-7nfd2 [921.122243ms]
Aug  5 14:33:51.006: INFO: Created: latency-svc-m8lx7
Aug  5 14:33:51.016: INFO: Got endpoints: latency-svc-m8lx7 [867.696047ms]
Aug  5 14:33:51.081: INFO: Created: latency-svc-9nz5d
Aug  5 14:33:51.111: INFO: Got endpoints: latency-svc-9nz5d [906.666593ms]
Aug  5 14:33:51.228: INFO: Created: latency-svc-ts844
Aug  5 14:33:51.231: INFO: Got endpoints: latency-svc-ts844 [937.415833ms]
Aug  5 14:33:51.255: INFO: Created: latency-svc-pbzcx
Aug  5 14:33:51.265: INFO: Got endpoints: latency-svc-pbzcx [960.504881ms]
Aug  5 14:33:51.289: INFO: Created: latency-svc-zs56l
Aug  5 14:33:51.302: INFO: Got endpoints: latency-svc-zs56l [948.631802ms]
Aug  5 14:33:51.320: INFO: Created: latency-svc-hmwr2
Aug  5 14:33:51.414: INFO: Got endpoints: latency-svc-hmwr2 [982.033068ms]
Aug  5 14:33:51.442: INFO: Created: latency-svc-k8dz5
Aug  5 14:33:51.452: INFO: Got endpoints: latency-svc-k8dz5 [990.305515ms]
Aug  5 14:33:51.477: INFO: Created: latency-svc-7xxl6
Aug  5 14:33:51.488: INFO: Got endpoints: latency-svc-7xxl6 [990.472727ms]
Aug  5 14:33:51.514: INFO: Created: latency-svc-js5d2
Aug  5 14:33:51.563: INFO: Got endpoints: latency-svc-js5d2 [948.269829ms]
Aug  5 14:33:51.578: INFO: Created: latency-svc-49r46
Aug  5 14:33:51.602: INFO: Got endpoints: latency-svc-49r46 [911.513198ms]
Aug  5 14:33:51.643: INFO: Created: latency-svc-sdl64
Aug  5 14:33:51.657: INFO: Got endpoints: latency-svc-sdl64 [887.789679ms]
Aug  5 14:33:51.713: INFO: Created: latency-svc-9dngr
Aug  5 14:33:51.717: INFO: Got endpoints: latency-svc-9dngr [899.941955ms]
Aug  5 14:33:51.747: INFO: Created: latency-svc-xvjsg
Aug  5 14:33:51.771: INFO: Got endpoints: latency-svc-xvjsg [917.515649ms]
Aug  5 14:33:51.801: INFO: Created: latency-svc-x55sp
Aug  5 14:33:51.857: INFO: Got endpoints: latency-svc-x55sp [931.47276ms]
Aug  5 14:33:51.871: INFO: Created: latency-svc-49lzf
Aug  5 14:33:51.887: INFO: Got endpoints: latency-svc-49lzf [907.027647ms]
Aug  5 14:33:51.907: INFO: Created: latency-svc-ftlmb
Aug  5 14:33:51.923: INFO: Got endpoints: latency-svc-ftlmb [907.208708ms]
Aug  5 14:33:51.945: INFO: Created: latency-svc-pnjbn
Aug  5 14:33:52.013: INFO: Got endpoints: latency-svc-pnjbn [901.977656ms]
Aug  5 14:33:52.047: INFO: Created: latency-svc-tzz5g
Aug  5 14:33:52.073: INFO: Got endpoints: latency-svc-tzz5g [841.952489ms]
Aug  5 14:33:52.106: INFO: Created: latency-svc-c5j77
Aug  5 14:33:52.156: INFO: Got endpoints: latency-svc-c5j77 [890.67761ms]
Aug  5 14:33:52.165: INFO: Created: latency-svc-gc6j8
Aug  5 14:33:52.182: INFO: Got endpoints: latency-svc-gc6j8 [879.872462ms]
Aug  5 14:33:52.201: INFO: Created: latency-svc-76tlg
Aug  5 14:33:52.218: INFO: Got endpoints: latency-svc-76tlg [804.376491ms]
Aug  5 14:33:52.237: INFO: Created: latency-svc-bpzwq
Aug  5 14:33:52.255: INFO: Got endpoints: latency-svc-bpzwq [802.861615ms]
Aug  5 14:33:52.315: INFO: Created: latency-svc-pfrtx
Aug  5 14:33:52.317: INFO: Got endpoints: latency-svc-pfrtx [829.182665ms]
Aug  5 14:33:52.341: INFO: Created: latency-svc-nt867
Aug  5 14:33:52.351: INFO: Got endpoints: latency-svc-nt867 [787.347864ms]
Aug  5 14:33:52.370: INFO: Created: latency-svc-rc6d7
Aug  5 14:33:52.381: INFO: Got endpoints: latency-svc-rc6d7 [779.478629ms]
Aug  5 14:33:52.406: INFO: Created: latency-svc-mhddl
Aug  5 14:33:52.450: INFO: Got endpoints: latency-svc-mhddl [792.35765ms]
Aug  5 14:33:52.453: INFO: Created: latency-svc-hnfmv
Aug  5 14:33:52.459: INFO: Got endpoints: latency-svc-hnfmv [742.387581ms]
Aug  5 14:33:52.483: INFO: Created: latency-svc-vbxm2
Aug  5 14:33:52.490: INFO: Got endpoints: latency-svc-vbxm2 [718.913348ms]
Aug  5 14:33:52.526: INFO: Created: latency-svc-sxv2n
Aug  5 14:33:52.593: INFO: Got endpoints: latency-svc-sxv2n [736.362819ms]
Aug  5 14:33:52.621: INFO: Created: latency-svc-6vzxq
Aug  5 14:33:52.635: INFO: Got endpoints: latency-svc-6vzxq [748.589056ms]
Aug  5 14:33:52.669: INFO: Created: latency-svc-ld5d2
Aug  5 14:33:52.749: INFO: Got endpoints: latency-svc-ld5d2 [826.047155ms]
Aug  5 14:33:52.752: INFO: Created: latency-svc-x6z49
Aug  5 14:33:52.755: INFO: Got endpoints: latency-svc-x6z49 [742.519533ms]
Aug  5 14:33:52.784: INFO: Created: latency-svc-h9jbb
Aug  5 14:33:52.808: INFO: Got endpoints: latency-svc-h9jbb [735.014924ms]
Aug  5 14:33:52.839: INFO: Created: latency-svc-l6ltm
Aug  5 14:33:52.868: INFO: Got endpoints: latency-svc-l6ltm [712.493188ms]
Aug  5 14:33:52.892: INFO: Created: latency-svc-ts7mm
Aug  5 14:33:52.912: INFO: Got endpoints: latency-svc-ts7mm [730.564654ms]
Aug  5 14:33:52.957: INFO: Created: latency-svc-ppk5c
Aug  5 14:33:52.994: INFO: Got endpoints: latency-svc-ppk5c [776.310788ms]
Aug  5 14:33:53.011: INFO: Created: latency-svc-mw8w8
Aug  5 14:33:53.041: INFO: Got endpoints: latency-svc-mw8w8 [786.159583ms]
Aug  5 14:33:53.090: INFO: Created: latency-svc-7l8gp
Aug  5 14:33:53.123: INFO: Got endpoints: latency-svc-7l8gp [805.688964ms]
Aug  5 14:33:53.123: INFO: Latencies: [54.100799ms 157.435396ms 174.56756ms 241.044171ms 315.862091ms 366.881781ms 397.083845ms 482.370863ms 493.772767ms 523.628391ms 554.590432ms 644.540365ms 656.968667ms 688.497789ms 712.493188ms 718.913348ms 723.853164ms 724.164605ms 730.564654ms 733.200438ms 733.963734ms 735.014924ms 736.043498ms 736.362819ms 736.378363ms 742.166611ms 742.387581ms 742.519533ms 744.982963ms 745.86806ms 747.649281ms 748.589056ms 748.710028ms 749.171424ms 751.831258ms 752.160407ms 757.106188ms 757.354694ms 759.153728ms 759.668286ms 760.09829ms 760.489029ms 762.467719ms 762.673621ms 762.829477ms 765.657315ms 765.914027ms 766.006008ms 766.071108ms 766.770477ms 766.858036ms 767.63305ms 768.856017ms 772.201911ms 776.310788ms 778.524212ms 779.478629ms 781.499675ms 782.227168ms 782.421901ms 783.176996ms 784.159084ms 784.240859ms 786.159583ms 787.295639ms 787.347864ms 788.587992ms 790.70563ms 791.743834ms 792.35765ms 799.857131ms 800.592068ms 800.743277ms 801.263998ms 801.932491ms 802.164093ms 802.427736ms 802.861615ms 804.376491ms 805.146273ms 805.688964ms 808.208218ms 808.363808ms 809.462899ms 809.99426ms 812.068272ms 812.880771ms 813.551122ms 822.941673ms 824.18521ms 824.939398ms 825.976375ms 826.017919ms 826.047155ms 826.121505ms 826.322893ms 827.083261ms 827.332596ms 827.507412ms 828.488065ms 829.182665ms 833.149798ms 833.226968ms 841.398499ms 841.952489ms 842.087854ms 844.187478ms 846.089605ms 847.400061ms 851.394351ms 852.590816ms 853.414824ms 853.857615ms 855.282206ms 855.309363ms 855.996358ms 856.280562ms 856.282999ms 856.340293ms 856.375605ms 857.000156ms 857.842214ms 860.942826ms 861.750814ms 862.206939ms 864.474542ms 867.696047ms 867.997871ms 868.293623ms 873.194284ms 879.872462ms 880.042783ms 880.320409ms 880.480529ms 880.747178ms 885.723544ms 885.851774ms 887.520911ms 887.789679ms 890.520906ms 890.67761ms 891.035675ms 891.431087ms 891.578516ms 891.839696ms 892.414054ms 892.871503ms 894.882527ms 895.547043ms 898.77631ms 899.941955ms 900.715383ms 901.977656ms 902.757766ms 906.666593ms 907.027647ms 907.208708ms 909.87665ms 911.123968ms 911.513198ms 912.245618ms 914.400183ms 915.531981ms 915.925795ms 917.515649ms 920.358654ms 921.122243ms 927.025829ms 927.665713ms 928.40074ms 928.768699ms 930.759373ms 931.3256ms 931.47276ms 933.752433ms 933.800346ms 933.939492ms 934.312692ms 937.415833ms 939.329369ms 941.507748ms 945.095257ms 945.750239ms 948.269829ms 948.631802ms 949.560505ms 960.504881ms 962.919516ms 963.205445ms 975.555949ms 975.619898ms 976.083678ms 982.033068ms 990.305515ms 990.472727ms 993.706481ms 995.834662ms 1.000656102s 1.0042581s 1.006141192s]
Aug  5 14:33:53.124: INFO: 50 %ile: 829.182665ms
Aug  5 14:33:53.124: INFO: 90 %ile: 941.507748ms
Aug  5 14:33:53.124: INFO: 99 %ile: 1.0042581s
Aug  5 14:33:53.124: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:33:53.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-480" for this suite.
Aug  5 14:34:27.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:34:27.222: INFO: namespace svc-latency-480 deletion completed in 34.092711834s

• [SLOW TEST:49.689 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
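A note on how the three percentile lines relate to the sample list above: in the sorted 200-sample list, the reported 50 %ile (829.182665ms) sits at position 101, the 90 %ile at 181, and the 99 %ile at 199, so the convention appears to be the one-based element at p*n+1, i.e. zero-based index p*n. A minimal sketch of the same lookup over a file with one latency per line (the file name and the index convention are assumptions from this run):

    sort -n latencies.txt | awk -v p=0.99 '{a[NR]=$0} END {print a[int(p*NR)+1]}'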
SSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:34:27.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Aug  5 14:34:31.345: INFO: Pod pod-hostip-9fa0d873-57e6-490d-97dd-9f39703b560a has hostIP: 172.18.0.7
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:34:31.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3730" for this suite.
Aug  5 14:34:53.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:34:53.435: INFO: namespace pods-3730 deletion completed in 22.083740996s

• [SLOW TEST:26.213 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
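The host-IP assertion above corresponds to a one-line query. A minimal sketch, assuming a pod POD in namespace NS (placeholders):

    kubectl -n NS get pod POD -o jsonpath='{.status.hostIP}'

An empty result simply means the pod has not been scheduled yet; the test above waits for the pod to run before checking the field.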
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:34:53.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug  5 14:34:54.172: INFO: Waiting up to 5m0s for pod "downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84" in namespace "downward-api-3330" to be "success or failure"
Aug  5 14:34:54.239: INFO: Pod "downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84": Phase="Pending", Reason="", readiness=false. Elapsed: 67.55987ms
Aug  5 14:34:56.359: INFO: Pod "downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187173447s
Aug  5 14:34:58.362: INFO: Pod "downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190612911s
Aug  5 14:35:00.366: INFO: Pod "downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.194157707s
Aug  5 14:35:02.718: INFO: Pod "downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84": Phase="Pending", Reason="", readiness=false. Elapsed: 8.546341094s
Aug  5 14:35:04.721: INFO: Pod "downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84": Phase="Pending", Reason="", readiness=false. Elapsed: 10.549837178s
Aug  5 14:35:06.762: INFO: Pod "downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84": Phase="Running", Reason="", readiness=true. Elapsed: 12.590531424s
Aug  5 14:35:08.765: INFO: Pod "downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.59367798s
STEP: Saw pod success
Aug  5 14:35:08.765: INFO: Pod "downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84" satisfied condition "success or failure"
Aug  5 14:35:08.768: INFO: Trying to get logs from node iruya-worker pod downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84 container dapi-container: 
STEP: delete the pod
Aug  5 14:35:08.796: INFO: Waiting for pod downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84 to disappear
Aug  5 14:35:08.864: INFO: Pod downward-api-f175b3c3-64e7-479c-b7f5-36ee1ae0ed84 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:35:08.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3330" for this suite.
Aug  5 14:35:14.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:35:15.026: INFO: namespace downward-api-3330 deletion completed in 6.15884082s

• [SLOW TEST:21.590 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
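The downward-API case above injects pod metadata as environment variables via fieldRef. A minimal sketch of an equivalent pod, assuming busybox instead of the suite's test image:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo name=$POD_NAME ns=$POD_NAMESPACE ip=$POD_IP"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
    EOF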
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:35:15.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-7ff83676-b140-4969-8ab9-4dea149a0df8
STEP: Creating a pod to test consume secrets
Aug  5 14:35:15.079: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dcba455a-b573-46ef-b5f9-953ddde8b073" in namespace "projected-6807" to be "success or failure"
Aug  5 14:35:15.134: INFO: Pod "pod-projected-secrets-dcba455a-b573-46ef-b5f9-953ddde8b073": Phase="Pending", Reason="", readiness=false. Elapsed: 54.497908ms
Aug  5 14:35:17.505: INFO: Pod "pod-projected-secrets-dcba455a-b573-46ef-b5f9-953ddde8b073": Phase="Pending", Reason="", readiness=false. Elapsed: 2.425881669s
Aug  5 14:35:19.529: INFO: Pod "pod-projected-secrets-dcba455a-b573-46ef-b5f9-953ddde8b073": Phase="Pending", Reason="", readiness=false. Elapsed: 4.449553998s
Aug  5 14:35:21.703: INFO: Pod "pod-projected-secrets-dcba455a-b573-46ef-b5f9-953ddde8b073": Phase="Pending", Reason="", readiness=false. Elapsed: 6.623556371s
Aug  5 14:35:23.706: INFO: Pod "pod-projected-secrets-dcba455a-b573-46ef-b5f9-953ddde8b073": Phase="Pending", Reason="", readiness=false. Elapsed: 8.626471266s
Aug  5 14:35:25.709: INFO: Pod "pod-projected-secrets-dcba455a-b573-46ef-b5f9-953ddde8b073": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.629766859s
STEP: Saw pod success
Aug  5 14:35:25.709: INFO: Pod "pod-projected-secrets-dcba455a-b573-46ef-b5f9-953ddde8b073" satisfied condition "success or failure"
Aug  5 14:35:25.711: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-dcba455a-b573-46ef-b5f9-953ddde8b073 container projected-secret-volume-test: 
STEP: delete the pod
Aug  5 14:35:26.635: INFO: Waiting for pod pod-projected-secrets-dcba455a-b573-46ef-b5f9-953ddde8b073 to disappear
Aug  5 14:35:26.671: INFO: Pod pod-projected-secrets-dcba455a-b573-46ef-b5f9-953ddde8b073 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:35:26.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6807" for this suite.
Aug  5 14:35:32.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:35:32.769: INFO: namespace projected-6807 deletion completed in 6.09448283s

• [SLOW TEST:17.743 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
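The spec above exercises the projected-volume API: a Secret is created, mounted read-only through a "projected" volume source, and a short-lived test container prints the mounted file so the framework can assert on the pod's output ("success or failure" here means the pod ran to completion). A minimal sketch of the kind of manifest involved, assuming the e2e mounttest image and an illustrative key name (the log does not show the Secret's contents):

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test         # illustrative; the real test appends a UID
stringData:
  data-1: value-1                     # assumed key/value, not shown in the log
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never                # lets the pod reach phase Succeeded, as seen above
  containers:
  - name: projected-secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_content=/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test
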
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:35:32.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug  5 14:35:44.973: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug  5 14:35:55.069: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:35:55.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5900" for this suite.
Aug  5 14:36:01.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:36:01.171: INFO: namespace pods-5900 deletion completed in 6.09695412s

• [SLOW TEST:28.402 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
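This spec covers the pod-deletion lifecycle: submit a pod, delete it with a grace period, and confirm through the apiserver proxy that the kubelet observed the termination notice before the object disappears. A sketch of the shape of pod involved and the field that governs the grace period, with illustrative names and image:

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove             # illustrative name
  labels:
    name: foo                         # matched by the selector the test sets up
spec:
  terminationGracePeriodSeconds: 30   # how long the kubelet waits between SIGTERM and SIGKILL
  containers:
  - name: nginx
    image: docker.io/library/nginx:1.14-alpine   # assumed image

A graceful delete (for example, kubectl delete pod pod-submit-remove --grace-period=30) first stamps a deletionTimestamp on the object; the pod is only removed from the API once the kubelet reports its containers terminated, which is the hand-off the "verifying the kubelet observed the termination notice" step checks.
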
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:36:01.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug  5 14:36:01.230: INFO: Creating ReplicaSet my-hostname-basic-c9586e94-9197-4e32-b221-fa5c8ef56698
Aug  5 14:36:01.240: INFO: Pod name my-hostname-basic-c9586e94-9197-4e32-b221-fa5c8ef56698: Found 0 pods out of 1
Aug  5 14:36:06.277: INFO: Pod name my-hostname-basic-c9586e94-9197-4e32-b221-fa5c8ef56698: Found 1 pods out of 1
Aug  5 14:36:06.277: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c9586e94-9197-4e32-b221-fa5c8ef56698" is running
Aug  5 14:36:12.283: INFO: Pod "my-hostname-basic-c9586e94-9197-4e32-b221-fa5c8ef56698-tcq68" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-05 14:36:01 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-05 14:36:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c9586e94-9197-4e32-b221-fa5c8ef56698]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-05 14:36:01 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-c9586e94-9197-4e32-b221-fa5c8ef56698]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-05 14:36:01 +0000 UTC Reason: Message:}])
Aug  5 14:36:12.283: INFO: Trying to dial the pod
Aug  5 14:36:17.289: INFO: Controller my-hostname-basic-c9586e94-9197-4e32-b221-fa5c8ef56698: Got expected result from replica 1 [my-hostname-basic-c9586e94-9197-4e32-b221-fa5c8ef56698-tcq68]: "my-hostname-basic-c9586e94-9197-4e32-b221-fa5c8ef56698-tcq68", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:36:17.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8974" for this suite.
Aug  5 14:36:23.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:36:23.357: INFO: namespace replicaset-8974 deletion completed in 6.066438778s

• [SLOW TEST:22.186 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
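The ReplicaSet spec is an end-to-end check of the controller: create a one-replica set from a public test image that answers HTTP requests with its own pod name, wait for the pod to run, then dial the replica and compare the response against the pod name ("Got expected result from replica 1" above). A manifest of the same shape, assuming the serve-hostname e2e image and an illustrative name:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic             # the real test appends a UID
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image; replies with the pod's hostname
        ports:
        - containerPort: 9376         # assumed port for the hostname server
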
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:36:23.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug  5 14:36:23.392: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug  5 14:36:23.462: INFO: Waiting for terminating namespaces to be deleted...
Aug  5 14:36:23.463: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug  5 14:36:23.466: INFO: kindnet-k7tjm from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container status recorded)
Aug  5 14:36:23.466: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug  5 14:36:23.466: INFO: kube-proxy-jzrnl from kube-system started at 2020-07-19 21:16:08 +0000 UTC (1 container status recorded)
Aug  5 14:36:23.466: INFO: 	Container kube-proxy ready: true, restart count 0
Aug  5 14:36:23.466: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug  5 14:36:23.471: INFO: kube-proxy-9ktgx from kube-system started at 2020-07-19 21:16:10 +0000 UTC (1 container status recorded)
Aug  5 14:36:23.471: INFO: 	Container kube-proxy ready: true, restart count 0
Aug  5 14:36:23.471: INFO: kindnet-8kg9z from kube-system started at 2020-07-19 21:16:09 +0000 UTC (1 container status recorded)
Aug  5 14:36:23.471: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Aug  5 14:36:23.533: INFO: Pod kindnet-8kg9z requesting resource cpu=100m on Node iruya-worker2
Aug  5 14:36:23.533: INFO: Pod kindnet-k7tjm requesting resource cpu=100m on Node iruya-worker
Aug  5 14:36:23.533: INFO: Pod kube-proxy-9ktgx requesting resource cpu=0m on Node iruya-worker2
Aug  5 14:36:23.533: INFO: Pod kube-proxy-jzrnl requesting resource cpu=0m on Node iruya-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2c635113-55e7-42dd-b4ab-5baa7c98e73c.162865f7f50972cc], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3230/filler-pod-2c635113-55e7-42dd-b4ab-5baa7c98e73c to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2c635113-55e7-42dd-b4ab-5baa7c98e73c.162865f9794f7942], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2c635113-55e7-42dd-b4ab-5baa7c98e73c.162865f9d71e5f38], Reason = [Created], Message = [Created container filler-pod-2c635113-55e7-42dd-b4ab-5baa7c98e73c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-2c635113-55e7-42dd-b4ab-5baa7c98e73c.162865f9e90a1f7a], Reason = [Started], Message = [Started container filler-pod-2c635113-55e7-42dd-b4ab-5baa7c98e73c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b0990f66-34b9-45ac-bfcd-d5a4496a8d31.162865f7f561ff1e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-3230/filler-pod-b0990f66-34b9-45ac-bfcd-d5a4496a8d31 to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b0990f66-34b9-45ac-bfcd-d5a4496a8d31.162865f97a5dd84a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b0990f66-34b9-45ac-bfcd-d5a4496a8d31.162865f9d389d4c6], Reason = [Created], Message = [Created container filler-pod-b0990f66-34b9-45ac-bfcd-d5a4496a8d31]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-b0990f66-34b9-45ac-bfcd-d5a4496a8d31.162865f9e34e642d], Reason = [Started], Message = [Started container filler-pod-b0990f66-34b9-45ac-bfcd-d5a4496a8d31]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162865fa4a30b680], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:36:34.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3230" for this suite.
Aug  5 14:36:40.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:36:40.742: INFO: namespace sched-pred-3230 deletion completed in 6.089814708s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:17.384 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
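The predicate test works by arithmetic on allocatable CPU: it sums the CPU requests already present on each schedulable node (kube-proxy requests none, kindnet requests 100m), starts pause-image "filler" pods sized to consume most of what remains, and then submits one more pod whose request cannot fit on any node. The expected outcome is exactly the FailedScheduling event above: "2 Insufficient cpu" for the workers, plus the tainted control-plane node. A sketch of the final over-requesting pod, with an illustrative request value:

apiVersion: v1
kind: Pod
metadata:
  name: additional-pod                # name matches the event above
spec:
  containers:
  - name: additional-pod
    image: k8s.gcr.io/pause:3.1       # same image the filler pods use
    resources:
      requests:
        cpu: 600m                     # illustrative; any request above the remaining allocatable CPU fails scheduling
      limits:
        cpu: 600m
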
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug  5 14:36:40.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug  5 14:36:40.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-136'
Aug  5 14:36:43.784: INFO: stderr: ""
Aug  5 14:36:43.784: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug  5 14:36:43.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-136'
Aug  5 14:36:43.884: INFO: stderr: ""
Aug  5 14:36:43.884: INFO: stdout: "update-demo-nautilus-jrlpt update-demo-nautilus-wkbtf "
Aug  5 14:36:43.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jrlpt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-136'
Aug  5 14:36:43.962: INFO: stderr: ""
Aug  5 14:36:43.962: INFO: stdout: ""
Aug  5 14:36:43.962: INFO: update-demo-nautilus-jrlpt is created but not running
Aug  5 14:36:48.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-136'
Aug  5 14:36:49.064: INFO: stderr: ""
Aug  5 14:36:49.064: INFO: stdout: "update-demo-nautilus-jrlpt update-demo-nautilus-wkbtf "
Aug  5 14:36:49.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jrlpt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-136'
Aug  5 14:36:49.158: INFO: stderr: ""
Aug  5 14:36:49.158: INFO: stdout: "true"
Aug  5 14:36:49.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jrlpt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-136'
Aug  5 14:36:49.250: INFO: stderr: ""
Aug  5 14:36:49.250: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  5 14:36:49.250: INFO: validating pod update-demo-nautilus-jrlpt
Aug  5 14:36:49.253: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  5 14:36:49.253: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug  5 14:36:49.253: INFO: update-demo-nautilus-jrlpt is verified up and running
Aug  5 14:36:49.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wkbtf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-136'
Aug  5 14:36:49.346: INFO: stderr: ""
Aug  5 14:36:49.346: INFO: stdout: ""
Aug  5 14:36:49.346: INFO: update-demo-nautilus-wkbtf is created but not running
Aug  5 14:36:54.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-136'
Aug  5 14:36:54.443: INFO: stderr: ""
Aug  5 14:36:54.443: INFO: stdout: "update-demo-nautilus-jrlpt update-demo-nautilus-wkbtf "
Aug  5 14:36:54.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jrlpt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-136'
Aug  5 14:36:54.532: INFO: stderr: ""
Aug  5 14:36:54.532: INFO: stdout: "true"
Aug  5 14:36:54.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jrlpt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-136'
Aug  5 14:36:54.629: INFO: stderr: ""
Aug  5 14:36:54.629: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  5 14:36:54.629: INFO: validating pod update-demo-nautilus-jrlpt
Aug  5 14:36:54.631: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  5 14:36:54.631: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug  5 14:36:54.631: INFO: update-demo-nautilus-jrlpt is verified up and running
Aug  5 14:36:54.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wkbtf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-136'
Aug  5 14:36:54.726: INFO: stderr: ""
Aug  5 14:36:54.726: INFO: stdout: "true"
Aug  5 14:36:54.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wkbtf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-136'
Aug  5 14:36:54.821: INFO: stderr: ""
Aug  5 14:36:54.821: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug  5 14:36:54.821: INFO: validating pod update-demo-nautilus-wkbtf
Aug  5 14:36:54.824: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug  5 14:36:54.824: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Aug  5 14:36:54.824: INFO: update-demo-nautilus-wkbtf is verified up and running
STEP: using delete to clean up resources
Aug  5 14:36:54.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-136'
Aug  5 14:36:54.927: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug  5 14:36:54.927: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug  5 14:36:54.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-136'
Aug  5 14:36:55.030: INFO: stderr: "No resources found.\n"
Aug  5 14:36:55.030: INFO: stdout: ""
Aug  5 14:36:55.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-136 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug  5 14:36:55.123: INFO: stderr: ""
Aug  5 14:36:55.123: INFO: stdout: "update-demo-nautilus-jrlpt\nupdate-demo-nautilus-wkbtf\n"
Aug  5 14:36:55.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-136'
Aug  5 14:36:55.721: INFO: stderr: "No resources found.\n"
Aug  5 14:36:55.721: INFO: stdout: ""
Aug  5 14:36:55.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-136 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug  5 14:36:55.863: INFO: stderr: ""
Aug  5 14:36:55.863: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug  5 14:36:55.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-136" for this suite.
Aug  5 14:37:21.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug  5 14:37:22.002: INFO: namespace kubectl-136 deletion completed in 26.135396449s

• [SLOW TEST:41.259 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
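The Update Demo run above shows the kubectl-driven flow end to end: a ReplicationController manifest is piped into kubectl create -f -, each name=update-demo pod is polled with the Go templates in the log until its container is running and serving the expected nautilus.jpg payload, and cleanup is a force delete (--grace-period=0 --force) followed by polling until no undeleted pods remain. A manifest consistent with the names and image in the log (the port is an assumption):

apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2                         # the log shows two pods, -jrlpt and -wkbtf
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo             # the container name the status templates match on
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80           # assumed
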
SSSSSSSSSSSSSSSSSSSSSSSSS
Aug  5 14:37:22.002: INFO: Running AfterSuite actions on all nodes
Aug  5 14:37:22.002: INFO: Running AfterSuite actions on node 1
Aug  5 14:37:22.002: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 6090.691 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS