I0209 12:56:09.348895 8 e2e.go:243] Starting e2e run "06b627d1-debe-4764-8f4b-2f9996ebffea" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581252968 - Will randomize all specs
Will run 215 of 4412 specs

Feb 9 12:56:09.629: INFO: >>> kubeConfig: /root/.kube/config
Feb 9 12:56:09.633: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 9 12:56:09.670: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 9 12:56:09.728: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 9 12:56:09.729: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 9 12:56:09.729: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 9 12:56:09.744: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 9 12:56:09.744: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 9 12:56:09.744: INFO: e2e test version: v1.15.7
Feb 9 12:56:09.746: INFO: kube-apiserver version: v1.15.1
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 12:56:09.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
Feb 9 12:56:09.830: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb 9 12:56:09.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2817'
Feb 9 12:56:11.956: INFO: stderr: ""
Feb 9 12:56:11.956: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb 9 12:56:13.006: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:13.006: INFO: Found 0 / 1
Feb 9 12:56:13.970: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:13.970: INFO: Found 0 / 1
Feb 9 12:56:14.963: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:14.964: INFO: Found 0 / 1
Feb 9 12:56:15.995: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:15.995: INFO: Found 0 / 1
Feb 9 12:56:16.965: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:16.965: INFO: Found 0 / 1
Feb 9 12:56:17.967: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:17.968: INFO: Found 0 / 1
Feb 9 12:56:18.974: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:18.974: INFO: Found 0 / 1
Feb 9 12:56:19.965: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:19.965: INFO: Found 0 / 1
Feb 9 12:56:20.965: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:20.965: INFO: Found 0 / 1
Feb 9 12:56:21.969: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:21.969: INFO: Found 0 / 1
Feb 9 12:56:22.970: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:22.970: INFO: Found 1 / 1
Feb 9 12:56:22.970: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Feb 9 12:56:22.975: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:22.975: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Feb 9 12:56:22.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-zfsgr --namespace=kubectl-2817 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 9 12:56:23.169: INFO: stderr: ""
Feb 9 12:56:23.170: INFO: stdout: "pod/redis-master-zfsgr patched\n"
STEP: checking annotations
Feb 9 12:56:23.246: INFO: Selector matched 1 pods for map[app:redis]
Feb 9 12:56:23.246: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 12:56:23.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2817" for this suite.
Feb 9 12:56:45.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 12:56:45.377: INFO: namespace kubectl-2817 deletion completed in 22.126482032s

• [SLOW TEST:35.631 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 12:56:45.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 9 12:56:45.497: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 12:56:46.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2137" for this suite.
Feb 9 12:56:52.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 12:56:52.764: INFO: namespace custom-resource-definition-2137 deletion completed in 6.148398671s

• [SLOW TEST:7.387 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 12:56:52.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 9 12:56:52.900: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac9c75c8-0773-4919-86c2-bc5b15908a89" in namespace "projected-9584" to be "success or failure"
Feb 9 12:56:52.938: INFO: Pod "downwardapi-volume-ac9c75c8-0773-4919-86c2-bc5b15908a89": Phase="Pending", Reason="", readiness=false. Elapsed: 37.615332ms
Feb 9 12:56:54.954: INFO: Pod "downwardapi-volume-ac9c75c8-0773-4919-86c2-bc5b15908a89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053566914s
Feb 9 12:56:56.965: INFO: Pod "downwardapi-volume-ac9c75c8-0773-4919-86c2-bc5b15908a89": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064131418s
Feb 9 12:56:58.974: INFO: Pod "downwardapi-volume-ac9c75c8-0773-4919-86c2-bc5b15908a89": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073846199s
Feb 9 12:57:00.989: INFO: Pod "downwardapi-volume-ac9c75c8-0773-4919-86c2-bc5b15908a89": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08845375s
Feb 9 12:57:02.999: INFO: Pod "downwardapi-volume-ac9c75c8-0773-4919-86c2-bc5b15908a89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098583584s
STEP: Saw pod success
Feb 9 12:57:02.999: INFO: Pod "downwardapi-volume-ac9c75c8-0773-4919-86c2-bc5b15908a89" satisfied condition "success or failure"
Feb 9 12:57:03.006: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ac9c75c8-0773-4919-86c2-bc5b15908a89 container client-container:
STEP: delete the pod
Feb 9 12:57:03.097: INFO: Waiting for pod downwardapi-volume-ac9c75c8-0773-4919-86c2-bc5b15908a89 to disappear
Feb 9 12:57:03.188: INFO: Pod downwardapi-volume-ac9c75c8-0773-4919-86c2-bc5b15908a89 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 12:57:03.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9584" for this suite.
Feb 9 12:57:09.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 12:57:09.371: INFO: namespace projected-9584 deletion completed in 6.173422918s

• [SLOW TEST:16.606 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 12:57:09.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Feb 9 12:57:09.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4242 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Feb 9 12:57:23.189: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0209 12:57:21.207521 81 log.go:172] (0xc0009060b0) (0xc00087c140) Create stream\nI0209 12:57:21.207733 81 log.go:172] (0xc0009060b0) (0xc00087c140) Stream added, broadcasting: 1\nI0209 12:57:21.218212 81 log.go:172] (0xc0009060b0) Reply frame received for 1\nI0209 12:57:21.218245 81 log.go:172] (0xc0009060b0) (0xc00073c1e0) Create stream\nI0209 12:57:21.218252 81 log.go:172] (0xc0009060b0) (0xc00073c1e0) Stream added, broadcasting: 3\nI0209 12:57:21.219650 81 log.go:172] (0xc0009060b0) Reply frame received for 3\nI0209 12:57:21.219683 81 log.go:172] (0xc0009060b0) (0xc00087c000) Create stream\nI0209 12:57:21.219695 81 log.go:172] (0xc0009060b0) (0xc00087c000) Stream added, broadcasting: 5\nI0209 12:57:21.222518 81 log.go:172] (0xc0009060b0) Reply frame received for 5\nI0209 12:57:21.222542 81 log.go:172] (0xc0009060b0) (0xc00073c280) Create stream\nI0209 12:57:21.222575 81 log.go:172] (0xc0009060b0) (0xc00073c280) Stream added, broadcasting: 7\nI0209 12:57:21.224190 81 log.go:172] (0xc0009060b0) Reply frame received for 7\nI0209 12:57:21.224292 81 log.go:172] (0xc00073c1e0) (3) Writing data frame\nI0209 12:57:21.224556 81 log.go:172] (0xc00073c1e0) (3) Writing data frame\nI0209 12:57:21.243096 81 log.go:172] (0xc0009060b0) Data frame received for 5\nI0209 12:57:21.243133 81 log.go:172] (0xc00087c000) (5) Data frame handling\nI0209 12:57:21.243150 81 log.go:172] (0xc00087c000) (5) Data frame sent\nI0209 12:57:21.247527 81 log.go:172] (0xc0009060b0) Data frame received for 5\nI0209 12:57:21.247541 81 log.go:172] (0xc00087c000) (5) Data frame handling\nI0209 12:57:21.247556 81 log.go:172] (0xc00087c000) (5) Data frame sent\nI0209 12:57:23.137648 81 log.go:172] (0xc0009060b0) Data frame received for 1\nI0209 12:57:23.138123 81 log.go:172] (0xc00087c140) (1) Data frame handling\nI0209 12:57:23.138199 81 log.go:172] (0xc00087c140) (1) Data frame sent\nI0209 12:57:23.138732 81 log.go:172] (0xc0009060b0) (0xc00073c280) Stream removed, broadcasting: 7\nI0209 12:57:23.138875 81 log.go:172] (0xc0009060b0) (0xc00087c000) Stream removed, broadcasting: 5\nI0209 12:57:23.138925 81 log.go:172] (0xc0009060b0) (0xc00087c140) Stream removed, broadcasting: 1\nI0209 12:57:23.139008 81 log.go:172] (0xc0009060b0) (0xc00087c140) Stream removed, broadcasting: 1\nI0209 12:57:23.139031 81 log.go:172] (0xc0009060b0) (0xc00073c1e0) Stream removed, broadcasting: 3\nI0209 12:57:23.139050 81 log.go:172] (0xc0009060b0) (0xc00087c000) Stream removed, broadcasting: 5\nI0209 12:57:23.139076 81 log.go:172] (0xc0009060b0) (0xc00073c280) Stream removed, broadcasting: 7\nI0209 12:57:23.139927 81 log.go:172] (0xc0009060b0) (0xc00073c1e0) Stream removed, broadcasting: 3\nI0209 12:57:23.140004 81 log.go:172] (0xc0009060b0) Go away received\n"
Feb 9 12:57:23.189: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 12:57:25.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4242" for this suite.
Feb 9 12:57:31.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 12:57:31.344: INFO: namespace kubectl-4242 deletion completed in 6.140361247s

• [SLOW TEST:21.973 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 12:57:31.345: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 9 12:57:42.090: INFO: Successfully updated pod "annotationupdate8dc5ddad-4efa-4f75-a3ca-2d263ba15103"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 12:57:44.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1066" for this suite.
Feb 9 12:58:08.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 12:58:08.441: INFO: namespace projected-1066 deletion completed in 24.152387665s

• [SLOW TEST:37.097 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 12:58:08.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 9 12:58:08.635: INFO: Number of nodes with available pods: 0
Feb 9 12:58:08.635: INFO: Node iruya-node is running more than one daemon pod
Feb 9 12:58:09.654: INFO: Number of nodes with available pods: 0
Feb 9 12:58:09.654: INFO: Node iruya-node is running more than one daemon pod
Feb 9 12:58:11.064: INFO: Number of nodes with available pods: 0
Feb 9 12:58:11.065: INFO: Node iruya-node is running more than one daemon pod
Feb 9 12:58:12.078: INFO: Number of nodes with available pods: 0
Feb 9 12:58:12.078: INFO: Node iruya-node is running more than one daemon pod
Feb 9 12:58:12.656: INFO: Number of nodes with available pods: 0
Feb 9 12:58:12.656: INFO: Node iruya-node is running more than one daemon pod
Feb 9 12:58:13.655: INFO: Number of nodes with available pods: 0
Feb 9 12:58:13.656: INFO: Node iruya-node is running more than one daemon pod
Feb 9 12:58:14.655: INFO: Number of nodes with available pods: 0
Feb 9 12:58:14.655: INFO: Node iruya-node is running more than one daemon pod
Feb 9 12:58:15.808: INFO: Number of nodes with available pods: 0
Feb 9 12:58:15.808: INFO: Node iruya-node is running more than one daemon pod
Feb 9 12:58:16.649: INFO: Number of nodes with available pods: 0
Feb 9 12:58:16.649: INFO: Node iruya-node is running more than one daemon pod
Feb 9 12:58:17.652: INFO: Number of nodes with available pods: 0
Feb 9 12:58:17.652: INFO: Node iruya-node is running more than one daemon pod
Feb 9 12:58:18.715: INFO: Number of nodes with available pods: 0
Feb 9 12:58:18.715: INFO: Node iruya-node is running more than one daemon pod
Feb 9 12:58:19.655: INFO: Number of nodes with available pods: 0
Feb 9 12:58:19.655: INFO: Node iruya-node is running more than one daemon pod
Feb 9 12:58:20.651: INFO: Number of nodes with available pods: 2
Feb 9 12:58:20.651: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Feb 9 12:58:20.685: INFO: Number of nodes with available pods: 1
Feb 9 12:58:20.685: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:21.706: INFO: Number of nodes with available pods: 1
Feb 9 12:58:21.706: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:22.769: INFO: Number of nodes with available pods: 1
Feb 9 12:58:22.769: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:23.698: INFO: Number of nodes with available pods: 1
Feb 9 12:58:23.698: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:24.842: INFO: Number of nodes with available pods: 1
Feb 9 12:58:24.842: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:25.697: INFO: Number of nodes with available pods: 1
Feb 9 12:58:25.697: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:26.705: INFO: Number of nodes with available pods: 1
Feb 9 12:58:26.705: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:27.792: INFO: Number of nodes with available pods: 1
Feb 9 12:58:27.792: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:28.713: INFO: Number of nodes with available pods: 1
Feb 9 12:58:28.713: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:29.703: INFO: Number of nodes with available pods: 1
Feb 9 12:58:29.703: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:30.700: INFO: Number of nodes with available pods: 1
Feb 9 12:58:30.700: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:31.706: INFO: Number of nodes with available pods: 1
Feb 9 12:58:31.706: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:32.698: INFO: Number of nodes with available pods: 1
Feb 9 12:58:32.698: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:33.703: INFO: Number of nodes with available pods: 1
Feb 9 12:58:33.703: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:34.700: INFO: Number of nodes with available pods: 1
Feb 9 12:58:34.700: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:35.705: INFO: Number of nodes with available pods: 1
Feb 9 12:58:35.705: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:36.700: INFO: Number of nodes with available pods: 1
Feb 9 12:58:36.700: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:37.697: INFO: Number of nodes with available pods: 1
Feb 9 12:58:37.697: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:38.705: INFO: Number of nodes with available pods: 1
Feb 9 12:58:38.706: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:39.704: INFO: Number of nodes with available pods: 1
Feb 9 12:58:39.704: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:40.718: INFO: Number of nodes with available pods: 1
Feb 9 12:58:40.719: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:42.076: INFO: Number of nodes with available pods: 1
Feb 9 12:58:42.076: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:42.701: INFO: Number of nodes with available pods: 1
Feb 9 12:58:42.701: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:43.706: INFO: Number of nodes with available pods: 1
Feb 9 12:58:43.706: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:44.712: INFO: Number of nodes with available pods: 1
Feb 9 12:58:44.712: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 12:58:45.714: INFO: Number of nodes with available pods: 2
Feb 9 12:58:45.714: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6229, will wait for the garbage collector to delete the pods
Feb 9 12:58:45.803: INFO: Deleting DaemonSet.extensions daemon-set took: 29.655825ms
Feb 9 12:58:46.104: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.961082ms
Feb 9 12:58:54.424: INFO: Number of nodes with available pods: 0
Feb 9 12:58:54.424: INFO: Number of running nodes: 0, number of available pods: 0
Feb 9 12:58:54.431: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6229/daemonsets","resourceVersion":"23691897"},"items":null}
Feb 9 12:58:54.435: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6229/pods","resourceVersion":"23691897"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 12:58:54.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6229" for this suite.
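[Editor's note: the DaemonSet test above creates a simple DaemonSet named "daemon-set", waits for one pod per node, deletes a daemon pod, and waits for the controller to revive it. A minimal manifest of the kind such a test applies might look like the sketch below; the labels, image, and container command are illustrative assumptions, not taken from the test source.]

```yaml
# Hypothetical minimal DaemonSet resembling what the e2e test creates.
# The name matches the log ("daemon-set" in namespace daemonsets-6229);
# labels, image, and command are assumptions for illustration only.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-6229
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/busybox:1.29   # same busybox tag used elsewhere in this run
        command: ["sleep", "3600"]
```

[Deleting one of the resulting pods (e.g. `kubectl delete pod -n daemonsets-6229 <pod>`) should trigger the revival behavior the log polls for, since the DaemonSet controller maintains one pod per eligible node.]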
Feb 9 12:59:00.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 12:59:00.614: INFO: namespace daemonsets-6229 deletion completed in 6.154732707s

• [SLOW TEST:52.172 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 12:59:00.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 9 12:59:00.683: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ebb016c2-3fcc-46d4-9c7a-592598e08980" in namespace "downward-api-7788" to be "success or failure"
Feb 9 12:59:00.695: INFO: Pod "downwardapi-volume-ebb016c2-3fcc-46d4-9c7a-592598e08980": Phase="Pending", Reason="", readiness=false. Elapsed: 12.210288ms
Feb 9 12:59:02.704: INFO: Pod "downwardapi-volume-ebb016c2-3fcc-46d4-9c7a-592598e08980": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021111285s
Feb 9 12:59:04.723: INFO: Pod "downwardapi-volume-ebb016c2-3fcc-46d4-9c7a-592598e08980": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039616397s
Feb 9 12:59:06.731: INFO: Pod "downwardapi-volume-ebb016c2-3fcc-46d4-9c7a-592598e08980": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048412191s
Feb 9 12:59:08.747: INFO: Pod "downwardapi-volume-ebb016c2-3fcc-46d4-9c7a-592598e08980": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063967838s
Feb 9 12:59:10.773: INFO: Pod "downwardapi-volume-ebb016c2-3fcc-46d4-9c7a-592598e08980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090291443s
STEP: Saw pod success
Feb 9 12:59:10.774: INFO: Pod "downwardapi-volume-ebb016c2-3fcc-46d4-9c7a-592598e08980" satisfied condition "success or failure"
Feb 9 12:59:10.785: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ebb016c2-3fcc-46d4-9c7a-592598e08980 container client-container:
STEP: delete the pod
Feb 9 12:59:11.016: INFO: Waiting for pod downwardapi-volume-ebb016c2-3fcc-46d4-9c7a-592598e08980 to disappear
Feb 9 12:59:11.019: INFO: Pod downwardapi-volume-ebb016c2-3fcc-46d4-9c7a-592598e08980 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 12:59:11.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7788" for this suite.
Feb 9 12:59:17.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 12:59:17.154: INFO: namespace downward-api-7788 deletion completed in 6.130149477s

• [SLOW TEST:16.539 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 12:59:17.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 9 12:59:37.386: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 9 12:59:37.407: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 9 12:59:39.407: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 9 12:59:39.416: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 9 12:59:41.407: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 9 12:59:41.417: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 9 12:59:43.407: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 9 12:59:43.420: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 9 12:59:45.407: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 9 12:59:45.426: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 9 12:59:47.407: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 9 12:59:47.419: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 9 12:59:49.407: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 9 12:59:49.419: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 9 12:59:51.407: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 9 12:59:51.416: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 9 12:59:53.407: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 9 12:59:53.413: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 9 12:59:55.407: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 9 12:59:55.418: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 9 12:59:57.407: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 9 12:59:57.420: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 9 12:59:59.407: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 9 12:59:59.417: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 12:59:59.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9766" for this suite.
Feb 9 13:00:23.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:00:23.598: INFO: namespace container-lifecycle-hook-9766 deletion completed in 24.137623402s

• [SLOW TEST:66.444 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:00:23.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 9 13:00:33.937: INFO: Waiting up to 5m0s for pod "client-envvars-3f1803e1-7eee-4105-9394-88afa9e77958" in namespace "pods-534" to be "success or failure"
Feb 9 13:00:33.957: INFO: Pod "client-envvars-3f1803e1-7eee-4105-9394-88afa9e77958": Phase="Pending", Reason="", readiness=false. Elapsed: 18.551984ms
Feb 9 13:00:35.969: INFO: Pod "client-envvars-3f1803e1-7eee-4105-9394-88afa9e77958": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030940158s
Feb 9 13:00:37.975: INFO: Pod "client-envvars-3f1803e1-7eee-4105-9394-88afa9e77958": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037155827s
Feb 9 13:00:39.984: INFO: Pod "client-envvars-3f1803e1-7eee-4105-9394-88afa9e77958": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046084226s
Feb 9 13:00:41.996: INFO: Pod "client-envvars-3f1803e1-7eee-4105-9394-88afa9e77958": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057670484s
Feb 9 13:00:44.008: INFO: Pod "client-envvars-3f1803e1-7eee-4105-9394-88afa9e77958": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069878843s
STEP: Saw pod success
Feb 9 13:00:44.008: INFO: Pod "client-envvars-3f1803e1-7eee-4105-9394-88afa9e77958" satisfied condition "success or failure"
Feb 9 13:00:44.013: INFO: Trying to get logs from node iruya-node pod client-envvars-3f1803e1-7eee-4105-9394-88afa9e77958 container env3cont:
STEP: delete the pod
Feb 9 13:00:44.123: INFO: Waiting for pod client-envvars-3f1803e1-7eee-4105-9394-88afa9e77958 to disappear
Feb 9 13:00:44.129: INFO: Pod client-envvars-3f1803e1-7eee-4105-9394-88afa9e77958 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:00:44.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-534" for this suite.
Feb 9 13:01:28.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:01:28.367: INFO: namespace pods-534 deletion completed in 44.23137156s

• [SLOW TEST:64.769 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:01:28.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 9 13:01:28.572: INFO: Waiting up to 5m0s for pod "downward-api-17b0d87f-27be-4945-969e-a1fec318bfac" in namespace "downward-api-6550" to be "success or failure"
Feb 9 13:01:28.585: INFO: Pod "downward-api-17b0d87f-27be-4945-969e-a1fec318bfac": Phase="Pending", Reason="", readiness=false. Elapsed: 12.821978ms
Feb 9 13:01:30.607: INFO: Pod "downward-api-17b0d87f-27be-4945-969e-a1fec318bfac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033977129s
Feb 9 13:01:32.628: INFO: Pod "downward-api-17b0d87f-27be-4945-969e-a1fec318bfac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055525446s
Feb 9 13:01:34.640: INFO: Pod "downward-api-17b0d87f-27be-4945-969e-a1fec318bfac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067227361s
Feb 9 13:01:36.649: INFO: Pod "downward-api-17b0d87f-27be-4945-969e-a1fec318bfac": Phase="Pending", Reason="", readiness=false. Elapsed: 8.076315296s
Feb 9 13:01:38.658: INFO: Pod "downward-api-17b0d87f-27be-4945-969e-a1fec318bfac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.085690813s
STEP: Saw pod success
Feb 9 13:01:38.659: INFO: Pod "downward-api-17b0d87f-27be-4945-969e-a1fec318bfac" satisfied condition "success or failure"
Feb 9 13:01:38.662: INFO: Trying to get logs from node iruya-node pod downward-api-17b0d87f-27be-4945-969e-a1fec318bfac container dapi-container:
STEP: delete the pod
Feb 9 13:01:38.713: INFO: Waiting for pod downward-api-17b0d87f-27be-4945-969e-a1fec318bfac to disappear
Feb 9 13:01:38.719: INFO: Pod downward-api-17b0d87f-27be-4945-969e-a1fec318bfac no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:01:38.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6550" for this suite.
Feb 9 13:01:44.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:01:45.079: INFO: namespace downward-api-6550 deletion completed in 6.354893262s

• [SLOW TEST:16.711 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:01:45.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 9 13:01:45.256: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9a528fe-4df6-462b-b265-479018d66e3d" in namespace "projected-3014" to be "success or failure"
Feb 9 13:01:45.264: INFO: Pod "downwardapi-volume-c9a528fe-4df6-462b-b265-479018d66e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.992071ms
Feb 9 13:01:47.273: INFO: Pod "downwardapi-volume-c9a528fe-4df6-462b-b265-479018d66e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016555177s
Feb 9 13:01:49.553: INFO: Pod "downwardapi-volume-c9a528fe-4df6-462b-b265-479018d66e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296936667s
Feb 9 13:01:51.566: INFO: Pod "downwardapi-volume-c9a528fe-4df6-462b-b265-479018d66e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.309479572s
Feb 9 13:01:53.578: INFO: Pod "downwardapi-volume-c9a528fe-4df6-462b-b265-479018d66e3d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.321506423s
Feb 9 13:01:55.595: INFO: Pod "downwardapi-volume-c9a528fe-4df6-462b-b265-479018d66e3d": Phase="Running", Reason="", readiness=true. Elapsed: 10.338903022s
Feb 9 13:01:57.606: INFO: Pod "downwardapi-volume-c9a528fe-4df6-462b-b265-479018d66e3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.349476857s
STEP: Saw pod success
Feb 9 13:01:57.606: INFO: Pod "downwardapi-volume-c9a528fe-4df6-462b-b265-479018d66e3d" satisfied condition "success or failure"
Feb 9 13:01:57.612: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-c9a528fe-4df6-462b-b265-479018d66e3d container client-container:
STEP: delete the pod
Feb 9 13:01:57.732: INFO: Waiting for pod downwardapi-volume-c9a528fe-4df6-462b-b265-479018d66e3d to disappear
Feb 9 13:01:57.783: INFO: Pod downwardapi-volume-c9a528fe-4df6-462b-b265-479018d66e3d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:01:57.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3014" for this suite.
Feb 9 13:02:03.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:02:03.980: INFO: namespace projected-3014 deletion completed in 6.185737497s

• [SLOW TEST:18.900 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:02:03.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 9 13:02:04.297: INFO: Number of nodes with available pods: 0
Feb 9 13:02:04.297: INFO: Node iruya-node is running more than one daemon pod
Feb 9 13:02:05.864: INFO: Number of nodes with available pods: 0
Feb 9 13:02:05.864: INFO: Node iruya-node is running more than one daemon pod
Feb 9 13:02:06.322: INFO: Number of nodes with available pods: 0
Feb 9 13:02:06.323: INFO: Node iruya-node is running more than one daemon pod
Feb 9 13:02:07.393: INFO: Number of nodes with available pods: 0
Feb 9 13:02:07.393: INFO: Node iruya-node is running more than one daemon pod
Feb 9 13:02:08.358: INFO: Number of nodes with available pods: 0
Feb 9 13:02:08.358: INFO: Node iruya-node is running more than one daemon pod
Feb 9 13:02:09.323: INFO: Number of nodes with available pods: 0
Feb 9 13:02:09.323: INFO: Node iruya-node is running more than one daemon pod
Feb 9 13:02:10.333: INFO: Number of nodes with available pods: 0
Feb 9 13:02:10.333: INFO: Node iruya-node is running more than one daemon pod
Feb 9 13:02:12.745: INFO: Number of nodes with available pods: 0
Feb 9 13:02:12.745: INFO: Node iruya-node is running more than one daemon pod
Feb 9 13:02:13.313: INFO: Number of nodes with available pods: 0
Feb 9 13:02:13.313: INFO: Node iruya-node is running more than one daemon pod
Feb 9 13:02:14.422: INFO: Number of nodes with available pods: 0
Feb 9 13:02:14.422: INFO: Node iruya-node is running more than one daemon pod
Feb 9 13:02:15.308: INFO: Number of nodes with available pods: 1
Feb 9 13:02:15.308: INFO: Node iruya-node is running more than one daemon pod
Feb 9 13:02:16.324: INFO: Number of nodes with available pods: 2
Feb 9 13:02:16.324: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 9 13:02:16.436: INFO: Number of nodes with available pods: 1
Feb 9 13:02:16.436: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 13:02:18.401: INFO: Number of nodes with available pods: 1
Feb 9 13:02:18.402: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 13:02:18.449: INFO: Number of nodes with available pods: 1
Feb 9 13:02:18.449: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 13:02:19.659: INFO: Number of nodes with available pods: 1
Feb 9 13:02:19.659: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 13:02:20.951: INFO: Number of nodes with available pods: 1
Feb 9 13:02:20.951: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 13:02:21.450: INFO: Number of nodes with available pods: 1
Feb 9 13:02:21.450: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 13:02:22.448: INFO: Number of nodes with available pods: 1
Feb 9 13:02:22.448: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 13:02:23.744: INFO: Number of nodes with available pods: 1
Feb 9 13:02:23.744: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 13:02:24.455: INFO: Number of nodes with available pods: 1
Feb 9 13:02:24.455: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 13:02:25.628: INFO: Number of nodes with available pods: 1
Feb 9 13:02:25.628: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 13:02:26.465: INFO: Number of nodes with available pods: 1
Feb 9 13:02:26.465: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 13:02:27.456: INFO: Number of nodes with available pods: 1
Feb 9 13:02:27.456: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Feb 9 13:02:28.464: INFO: Number of nodes with available pods: 2
Feb 9 13:02:28.465: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-459, will wait for the garbage collector to delete the pods
Feb 9 13:02:28.574: INFO: Deleting DaemonSet.extensions daemon-set took: 48.461571ms
Feb 9 13:02:28.875: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.963801ms
Feb 9 13:02:37.011: INFO: Number of nodes with available pods: 0
Feb 9 13:02:37.011: INFO: Number of running nodes: 0, number of available pods: 0
Feb 9 13:02:37.018: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-459/daemonsets","resourceVersion":"23692425"},"items":null}
Feb 9 13:02:37.023: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-459/pods","resourceVersion":"23692425"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:02:37.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-459" for this suite.
Feb 9 13:02:43.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:02:43.193: INFO: namespace daemonsets-459 deletion completed in 6.147730513s

• [SLOW TEST:39.212 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:02:43.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-b6848022-ab60-4018-a0c6-9427c327225b
STEP: Creating a pod to test consume configMaps
Feb 9 13:02:43.370: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-026f6623-57f2-470c-b42c-01d98b237fa1" in namespace "projected-7790" to be "success or failure"
Feb 9 13:02:43.396: INFO: Pod "pod-projected-configmaps-026f6623-57f2-470c-b42c-01d98b237fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 26.186918ms
Feb 9 13:02:45.402: INFO: Pod "pod-projected-configmaps-026f6623-57f2-470c-b42c-01d98b237fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032063757s
Feb 9 13:02:47.408: INFO: Pod "pod-projected-configmaps-026f6623-57f2-470c-b42c-01d98b237fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038008743s
Feb 9 13:02:49.429: INFO: Pod "pod-projected-configmaps-026f6623-57f2-470c-b42c-01d98b237fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058968111s
Feb 9 13:02:51.438: INFO: Pod "pod-projected-configmaps-026f6623-57f2-470c-b42c-01d98b237fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067483574s
Feb 9 13:02:53.446: INFO: Pod "pod-projected-configmaps-026f6623-57f2-470c-b42c-01d98b237fa1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.075605495s
Feb 9 13:02:55.462: INFO: Pod "pod-projected-configmaps-026f6623-57f2-470c-b42c-01d98b237fa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.091412413s
STEP: Saw pod success
Feb 9 13:02:55.462: INFO: Pod "pod-projected-configmaps-026f6623-57f2-470c-b42c-01d98b237fa1" satisfied condition "success or failure"
Feb 9 13:02:55.467: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-026f6623-57f2-470c-b42c-01d98b237fa1 container projected-configmap-volume-test:
STEP: delete the pod
Feb 9 13:02:55.640: INFO: Waiting for pod pod-projected-configmaps-026f6623-57f2-470c-b42c-01d98b237fa1 to disappear
Feb 9 13:02:55.719: INFO: Pod pod-projected-configmaps-026f6623-57f2-470c-b42c-01d98b237fa1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:02:55.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7790" for this suite.
Feb 9 13:03:01.768: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:03:01.978: INFO: namespace projected-7790 deletion completed in 6.249032236s

• [SLOW TEST:18.785 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:03:01.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 9 13:03:02.120: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:03:26.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2000" for this suite.
Feb 9 13:03:32.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:03:32.757: INFO: namespace pods-2000 deletion completed in 6.217587062s

• [SLOW TEST:30.778 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:03:32.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-eea37aad-47b3-4299-882a-25d4b0b3b4c0
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-eea37aad-47b3-4299-882a-25d4b0b3b4c0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:05:00.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8701" for this suite.
Feb 9 13:05:22.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:05:22.930: INFO: namespace configmap-8701 deletion completed in 22.158803321s

• [SLOW TEST:110.173 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:05:22.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb 9 13:05:35.726: INFO: Successfully updated pod "labelsupdate45ae6c61-a2ce-417a-99cb-ae11ab52e1b6"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:05:37.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6754" for this suite.
Feb 9 13:06:07.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:06:08.054: INFO: namespace downward-api-6754 deletion completed in 30.119251s

• [SLOW TEST:45.123 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:06:08.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 9 13:06:08.110: INFO: Waiting up to 5m0s for pod "pod-683a47b9-d9fb-4c11-9a21-04d256157a5c" in namespace "emptydir-8883" to be "success or failure"
Feb 9 13:06:08.115: INFO: Pod "pod-683a47b9-d9fb-4c11-9a21-04d256157a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.998144ms
Feb 9 13:06:10.152: INFO: Pod "pod-683a47b9-d9fb-4c11-9a21-04d256157a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04121081s
Feb 9 13:06:12.180: INFO: Pod "pod-683a47b9-d9fb-4c11-9a21-04d256157a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069874201s
Feb 9 13:06:14.196: INFO: Pod "pod-683a47b9-d9fb-4c11-9a21-04d256157a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085569886s
Feb 9 13:06:16.204: INFO: Pod "pod-683a47b9-d9fb-4c11-9a21-04d256157a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093909116s
Feb 9 13:06:18.215: INFO: Pod "pod-683a47b9-d9fb-4c11-9a21-04d256157a5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104485434s
STEP: Saw pod success
Feb 9 13:06:18.215: INFO: Pod "pod-683a47b9-d9fb-4c11-9a21-04d256157a5c" satisfied condition "success or failure"
Feb 9 13:06:18.220: INFO: Trying to get logs from node iruya-node pod pod-683a47b9-d9fb-4c11-9a21-04d256157a5c container test-container:
STEP: delete the pod
Feb 9 13:06:18.399: INFO: Waiting for pod pod-683a47b9-d9fb-4c11-9a21-04d256157a5c to disappear
Feb 9 13:06:18.413: INFO: Pod pod-683a47b9-d9fb-4c11-9a21-04d256157a5c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:06:18.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8883" for this suite.
Feb 9 13:06:24.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:06:24.653: INFO: namespace emptydir-8883 deletion completed in 6.231011003s

• [SLOW TEST:16.599 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:06:24.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 9 13:06:24.723: INFO: Waiting up to 5m0s for pod "pod-2357f071-3cd7-4374-9c3c-c24eccc77c01" in namespace "emptydir-2934" to be "success or failure"
Feb 9 13:06:24.776: INFO: Pod "pod-2357f071-3cd7-4374-9c3c-c24eccc77c01": Phase="Pending", Reason="", readiness=false. Elapsed: 52.544186ms
Feb 9 13:06:26.790: INFO: Pod "pod-2357f071-3cd7-4374-9c3c-c24eccc77c01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067157436s
Feb 9 13:06:28.798: INFO: Pod "pod-2357f071-3cd7-4374-9c3c-c24eccc77c01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074692525s
Feb 9 13:06:30.809: INFO: Pod "pod-2357f071-3cd7-4374-9c3c-c24eccc77c01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085412627s
Feb 9 13:06:32.815: INFO: Pod "pod-2357f071-3cd7-4374-9c3c-c24eccc77c01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091869439s
Feb 9 13:06:34.826: INFO: Pod "pod-2357f071-3cd7-4374-9c3c-c24eccc77c01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102550098s
STEP: Saw pod success
Feb 9 13:06:34.826: INFO: Pod "pod-2357f071-3cd7-4374-9c3c-c24eccc77c01" satisfied condition "success or failure"
Feb 9 13:06:34.831: INFO: Trying to get logs from node iruya-node pod pod-2357f071-3cd7-4374-9c3c-c24eccc77c01 container test-container:
STEP: delete the pod
Feb 9 13:06:34.937: INFO: Waiting for pod pod-2357f071-3cd7-4374-9c3c-c24eccc77c01 to disappear
Feb 9 13:06:34.943: INFO: Pod pod-2357f071-3cd7-4374-9c3c-c24eccc77c01 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:06:34.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2934" for this suite.
Feb 9 13:06:41.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:06:41.184: INFO: namespace emptydir-2934 deletion completed in 6.235030204s

• [SLOW TEST:16.530 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:06:41.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-e3206951-0a2d-4f5a-9e47-5fa7211c16df
STEP: Creating a pod to test consume configMaps
Feb 9 13:06:41.398: INFO: Waiting up to 5m0s for pod "pod-configmaps-28799891-4a37-49a8-866d-012b5f643cab" in namespace "configmap-7730" to be "success or failure"
Feb 9 13:06:41.463: INFO: Pod "pod-configmaps-28799891-4a37-49a8-866d-012b5f643cab": Phase="Pending", Reason="", readiness=false. Elapsed: 65.087482ms
Feb 9 13:06:43.473: INFO: Pod "pod-configmaps-28799891-4a37-49a8-866d-012b5f643cab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074787795s
Feb 9 13:06:45.488: INFO: Pod "pod-configmaps-28799891-4a37-49a8-866d-012b5f643cab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089920884s
Feb 9 13:06:47.538: INFO: Pod "pod-configmaps-28799891-4a37-49a8-866d-012b5f643cab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.139619086s
Feb 9 13:06:49.548: INFO: Pod "pod-configmaps-28799891-4a37-49a8-866d-012b5f643cab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.149783206s
Feb 9 13:06:51.572: INFO: Pod "pod-configmaps-28799891-4a37-49a8-866d-012b5f643cab": Phase="Running", Reason="", readiness=true. Elapsed: 10.174035794s
Feb 9 13:06:53.597: INFO: Pod "pod-configmaps-28799891-4a37-49a8-866d-012b5f643cab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.19895497s
STEP: Saw pod success
Feb 9 13:06:53.598: INFO: Pod "pod-configmaps-28799891-4a37-49a8-866d-012b5f643cab" satisfied condition "success or failure"
Feb 9 13:06:53.602: INFO: Trying to get logs from node iruya-node pod pod-configmaps-28799891-4a37-49a8-866d-012b5f643cab container configmap-volume-test:
STEP: delete the pod
Feb 9 13:06:53.751: INFO: Waiting for pod pod-configmaps-28799891-4a37-49a8-866d-012b5f643cab to disappear
Feb 9 13:06:53.762: INFO: Pod pod-configmaps-28799891-4a37-49a8-866d-012b5f643cab no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:06:53.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7730" for this suite.
Feb 9 13:06:59.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:06:59.995: INFO: namespace configmap-7730 deletion completed in 6.140094737s

• [SLOW TEST:18.811 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:06:59.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 9 13:07:20.867: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 9 13:07:20.881: INFO: Pod pod-with-prestop-http-hook still exists
Feb 9 13:07:22.881: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 9 13:07:22.890: INFO: Pod pod-with-prestop-http-hook still exists
Feb 9 13:07:24.881: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 9 13:07:24.895: INFO: Pod pod-with-prestop-http-hook still exists
Feb 9 13:07:26.882: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 9 13:07:26.898: INFO: Pod pod-with-prestop-http-hook still exists
Feb 9 13:07:28.882: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 9 13:07:28.892: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:07:28.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-105" for this suite.
Feb 9 13:07:50.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:07:51.115: INFO: namespace container-lifecycle-hook-105 deletion completed in 22.16237111s

• [SLOW TEST:51.120 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:07:51.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 9 13:08:01.538: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:08:01.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9084" for this suite.
Feb 9 13:08:07.724: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:08:07.924: INFO: namespace container-runtime-9084 deletion completed in 6.216876138s

• [SLOW TEST:16.809 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:08:07.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 9 13:08:08.095: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d9ed2f7-0248-4a6e-a10a-027c41c642ce" in namespace "projected-6654" to be "success or failure"
Feb 9 13:08:08.137: INFO: Pod "downwardapi-volume-6d9ed2f7-0248-4a6e-a10a-027c41c642ce": Phase="Pending", Reason="", readiness=false. Elapsed: 41.775072ms
Feb 9 13:08:10.142: INFO: Pod "downwardapi-volume-6d9ed2f7-0248-4a6e-a10a-027c41c642ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047123039s
Feb 9 13:08:12.151: INFO: Pod "downwardapi-volume-6d9ed2f7-0248-4a6e-a10a-027c41c642ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055253695s
Feb 9 13:08:14.159: INFO: Pod "downwardapi-volume-6d9ed2f7-0248-4a6e-a10a-027c41c642ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063244093s
Feb 9 13:08:16.168: INFO: Pod "downwardapi-volume-6d9ed2f7-0248-4a6e-a10a-027c41c642ce": Phase="Pending", Reason="", readiness=false. Elapsed: 8.072712667s
Feb 9 13:08:18.174: INFO: Pod "downwardapi-volume-6d9ed2f7-0248-4a6e-a10a-027c41c642ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.078796937s
STEP: Saw pod success
Feb 9 13:08:18.174: INFO: Pod "downwardapi-volume-6d9ed2f7-0248-4a6e-a10a-027c41c642ce" satisfied condition "success or failure"
Feb 9 13:08:18.177: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-6d9ed2f7-0248-4a6e-a10a-027c41c642ce container client-container:
STEP: delete the pod
Feb 9 13:08:18.225: INFO: Waiting for pod downwardapi-volume-6d9ed2f7-0248-4a6e-a10a-027c41c642ce to disappear
Feb 9 13:08:18.246: INFO: Pod downwardapi-volume-6d9ed2f7-0248-4a6e-a10a-027c41c642ce no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:08:18.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6654" for this suite.
Feb 9 13:08:24.304: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:08:24.523: INFO: namespace projected-6654 deletion completed in 6.268194295s

• [SLOW TEST:16.597 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:08:24.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 9 13:08:24.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5abadc9-46a6-491e-9a4a-2df838fce6d4" in namespace "downward-api-7513" to be "success or failure"
Feb 9 13:08:24.673: INFO: Pod "downwardapi-volume-b5abadc9-46a6-491e-9a4a-2df838fce6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.744037ms
Feb 9 13:08:26.683: INFO: Pod "downwardapi-volume-b5abadc9-46a6-491e-9a4a-2df838fce6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014812929s
Feb 9 13:08:28.693: INFO: Pod "downwardapi-volume-b5abadc9-46a6-491e-9a4a-2df838fce6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024872392s
Feb 9 13:08:30.701: INFO: Pod "downwardapi-volume-b5abadc9-46a6-491e-9a4a-2df838fce6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032887983s
Feb 9 13:08:32.709: INFO: Pod "downwardapi-volume-b5abadc9-46a6-491e-9a4a-2df838fce6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041052615s
Feb 9 13:08:34.718: INFO: Pod "downwardapi-volume-b5abadc9-46a6-491e-9a4a-2df838fce6d4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.049327319s
Feb 9 13:08:36.724: INFO: Pod "downwardapi-volume-b5abadc9-46a6-491e-9a4a-2df838fce6d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.056157779s
STEP: Saw pod success
Feb 9 13:08:36.724: INFO: Pod "downwardapi-volume-b5abadc9-46a6-491e-9a4a-2df838fce6d4" satisfied condition "success or failure"
Feb 9 13:08:36.729: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-b5abadc9-46a6-491e-9a4a-2df838fce6d4 container client-container:
STEP: delete the pod
Feb 9 13:08:36.782: INFO: Waiting for pod downwardapi-volume-b5abadc9-46a6-491e-9a4a-2df838fce6d4 to disappear
Feb 9 13:08:36.788: INFO: Pod downwardapi-volume-b5abadc9-46a6-491e-9a4a-2df838fce6d4 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:08:36.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7513" for this suite.
Feb 9 13:08:42.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:08:43.032: INFO: namespace downward-api-7513 deletion completed in 6.16581547s

• [SLOW TEST:18.509 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:08:43.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 9 13:08:53.668: INFO: Successfully updated pod "pod-update-689164fc-9890-4b34-b6c2-b9c012aa006b"
STEP: verifying the updated pod is in kubernetes
Feb 9 13:08:53.728: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:08:53.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7150" for this suite.
Feb 9 13:09:15.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:09:15.919: INFO: namespace pods-7150 deletion completed in 22.186046422s

• [SLOW TEST:32.887 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:09:15.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-cm9g
STEP: Creating a pod to test atomic-volume-subpath
Feb 9 13:09:16.048: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-cm9g" in namespace "subpath-8131" to be "success or failure"
Feb 9 13:09:16.079: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 30.906486ms
Feb 9 13:09:18.090: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041336317s
Feb 9 13:09:20.197: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 4.148069753s
Feb 9 13:09:22.205: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 6.156767394s
Feb 9 13:09:24.211: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162388256s
Feb 9 13:09:26.221: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Running", Reason="", readiness=true. Elapsed: 10.172316799s
Feb 9 13:09:28.228: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Running", Reason="", readiness=true. Elapsed: 12.179708931s
Feb 9 13:09:30.237: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Running", Reason="", readiness=true. Elapsed: 14.188904937s
Feb 9 13:09:32.249: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Running", Reason="", readiness=true. Elapsed: 16.200030174s
Feb 9 13:09:34.257: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Running", Reason="", readiness=true. Elapsed: 18.208485769s
Feb 9 13:09:36.265: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Running", Reason="", readiness=true. Elapsed: 20.216481904s
Feb 9 13:09:38.275: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Running", Reason="", readiness=true. Elapsed: 22.226138082s
Feb 9 13:09:40.283: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Running", Reason="", readiness=true. Elapsed: 24.234846415s
Feb 9 13:09:42.292: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Running", Reason="", readiness=true. Elapsed: 26.243243802s
Feb 9 13:09:44.302: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Running", Reason="", readiness=true. Elapsed: 28.253280041s
Feb 9 13:09:46.318: INFO: Pod "pod-subpath-test-configmap-cm9g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.269385479s
STEP: Saw pod success
Feb 9 13:09:46.318: INFO: Pod "pod-subpath-test-configmap-cm9g" satisfied condition "success or failure"
Feb 9 13:09:46.325: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-cm9g container test-container-subpath-configmap-cm9g:
STEP: delete the pod
Feb 9 13:09:46.401: INFO: Waiting for pod pod-subpath-test-configmap-cm9g to disappear
Feb 9 13:09:46.408: INFO: Pod pod-subpath-test-configmap-cm9g no longer exists
STEP: Deleting pod pod-subpath-test-configmap-cm9g
Feb 9 13:09:46.408: INFO: Deleting pod "pod-subpath-test-configmap-cm9g" in namespace "subpath-8131"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 9 13:09:46.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8131" for this suite.
Feb 9 13:09:52.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 9 13:09:52.543: INFO: namespace subpath-8131 deletion completed in 6.127365643s

• [SLOW TEST:36.623 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 9 13:09:52.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 9 13:09:52.663: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log
alternatives.l... (200; 20.818054ms)
Feb  9 13:09:52.669: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.979987ms)
Feb  9 13:09:52.675: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.114438ms)
Feb  9 13:09:52.680: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.510963ms)
Feb  9 13:09:52.687: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.222052ms)
Feb  9 13:09:52.697: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.00339ms)
Feb  9 13:09:52.737: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 39.84209ms)
Feb  9 13:09:52.743: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.302724ms)
Feb  9 13:09:52.750: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.367134ms)
Feb  9 13:09:52.756: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.27082ms)
Feb  9 13:09:52.763: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.253865ms)
Feb  9 13:09:52.768: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.166428ms)
Feb  9 13:09:52.776: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.772295ms)
Feb  9 13:09:52.782: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.713185ms)
Feb  9 13:09:52.788: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.053481ms)
Feb  9 13:09:52.794: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.549309ms)
Feb  9 13:09:52.799: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.26517ms)
Feb  9 13:09:52.805: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.551863ms)
Feb  9 13:09:52.813: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.105691ms)
Feb  9 13:09:52.821: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.9219ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:09:52.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1192" for this suite.
Feb  9 13:09:58.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:09:58.997: INFO: namespace proxy-1192 deletion completed in 6.170118028s

• [SLOW TEST:6.454 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:09:58.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-1660b7fc-27c2-4d87-bfaa-73b2d6b712ef in namespace container-probe-9143
Feb  9 13:10:07.164: INFO: Started pod busybox-1660b7fc-27c2-4d87-bfaa-73b2d6b712ef in namespace container-probe-9143
STEP: checking the pod's current state and verifying that restartCount is present
Feb  9 13:10:07.170: INFO: Initial restart count of pod busybox-1660b7fc-27c2-4d87-bfaa-73b2d6b712ef is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:14:08.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9143" for this suite.
Feb  9 13:14:15.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:14:15.173: INFO: namespace container-probe-9143 deletion completed in 6.181609325s

• [SLOW TEST:256.175 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:14:15.174: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:14:15.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6097" for this suite.
Feb  9 13:14:21.272: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:14:21.359: INFO: namespace services-6097 deletion completed in 6.108179651s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.185 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
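The "secure master service" spec above finishes almost immediately because it only inspects the built-in `kubernetes` Service rather than creating workloads. A rough sketch of the kind of assertion involved, using a plain dict as a stand-in for the Service object; the specific port name and number checked here are assumptions about what the conformance test verifies, not taken from this log:

```python
def has_secure_port(service: dict) -> bool:
    """Return True if the Service exposes a port named "https" on 443."""
    ports = service.get("spec", {}).get("ports", [])
    return any(p.get("name") == "https" and p.get("port") == 443 for p in ports)

# Minimal stand-in for the default master Service in the `default` namespace.
kubernetes_svc = {
    "metadata": {"name": "kubernetes", "namespace": "default"},
    "spec": {"ports": [{"name": "https", "port": 443, "targetPort": 6443}]},
}
print(has_secure_port(kubernetes_svc))  # → True
```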
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:14:21.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-925
I0209 13:14:21.527813       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-925, replica count: 1
I0209 13:14:22.579007       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:14:23.579555       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:14:24.580254       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:14:25.580961       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:14:26.581848       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:14:27.582394       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:14:28.582916       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:14:29.583366       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:14:30.583897       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  9 13:14:30.745: INFO: Created: latency-svc-fpz28
Feb  9 13:14:30.848: INFO: Got endpoints: latency-svc-fpz28 [163.506271ms]
Feb  9 13:14:30.920: INFO: Created: latency-svc-qbjjb
Feb  9 13:14:30.931: INFO: Got endpoints: latency-svc-qbjjb [81.706535ms]
Feb  9 13:14:31.078: INFO: Created: latency-svc-5slz2
Feb  9 13:14:31.111: INFO: Got endpoints: latency-svc-5slz2 [262.903995ms]
Feb  9 13:14:31.116: INFO: Created: latency-svc-vsk6r
Feb  9 13:14:31.120: INFO: Got endpoints: latency-svc-vsk6r [269.11167ms]
Feb  9 13:14:31.157: INFO: Created: latency-svc-mwx6m
Feb  9 13:14:31.158: INFO: Got endpoints: latency-svc-mwx6m [309.309211ms]
Feb  9 13:14:31.275: INFO: Created: latency-svc-b5wp2
Feb  9 13:14:31.304: INFO: Got endpoints: latency-svc-b5wp2 [454.542042ms]
Feb  9 13:14:31.311: INFO: Created: latency-svc-bpc5q
Feb  9 13:14:31.320: INFO: Got endpoints: latency-svc-bpc5q [470.157384ms]
Feb  9 13:14:31.372: INFO: Created: latency-svc-dqjpx
Feb  9 13:14:31.456: INFO: Got endpoints: latency-svc-dqjpx [606.259999ms]
Feb  9 13:14:31.468: INFO: Created: latency-svc-28jlq
Feb  9 13:14:31.486: INFO: Got endpoints: latency-svc-28jlq [635.692661ms]
Feb  9 13:14:31.841: INFO: Created: latency-svc-wr8x9
Feb  9 13:14:31.860: INFO: Got endpoints: latency-svc-wr8x9 [1.009602463s]
Feb  9 13:14:32.044: INFO: Created: latency-svc-vktmg
Feb  9 13:14:32.116: INFO: Got endpoints: latency-svc-vktmg [1.265442124s]
Feb  9 13:14:32.124: INFO: Created: latency-svc-lwb8v
Feb  9 13:14:32.253: INFO: Got endpoints: latency-svc-lwb8v [1.403264055s]
Feb  9 13:14:32.363: INFO: Created: latency-svc-vsl76
Feb  9 13:14:32.462: INFO: Got endpoints: latency-svc-vsl76 [1.611196691s]
Feb  9 13:14:32.474: INFO: Created: latency-svc-zgjht
Feb  9 13:14:32.489: INFO: Got endpoints: latency-svc-zgjht [1.638241076s]
Feb  9 13:14:32.544: INFO: Created: latency-svc-qfvmk
Feb  9 13:14:32.555: INFO: Got endpoints: latency-svc-qfvmk [1.705136852s]
Feb  9 13:14:32.674: INFO: Created: latency-svc-9ldgf
Feb  9 13:14:32.680: INFO: Got endpoints: latency-svc-9ldgf [1.829359608s]
Feb  9 13:14:32.734: INFO: Created: latency-svc-5s7wj
Feb  9 13:14:32.734: INFO: Got endpoints: latency-svc-5s7wj [1.80263556s]
Feb  9 13:14:32.870: INFO: Created: latency-svc-6dwl6
Feb  9 13:14:32.896: INFO: Got endpoints: latency-svc-6dwl6 [1.784532801s]
Feb  9 13:14:32.904: INFO: Created: latency-svc-qtmnq
Feb  9 13:14:32.923: INFO: Got endpoints: latency-svc-qtmnq [1.803077427s]
Feb  9 13:14:33.095: INFO: Created: latency-svc-psj99
Feb  9 13:14:33.111: INFO: Got endpoints: latency-svc-psj99 [1.952668273s]
Feb  9 13:14:33.192: INFO: Created: latency-svc-mrdqq
Feb  9 13:14:33.291: INFO: Got endpoints: latency-svc-mrdqq [1.987110753s]
Feb  9 13:14:33.340: INFO: Created: latency-svc-zvqql
Feb  9 13:14:33.376: INFO: Created: latency-svc-ch52z
Feb  9 13:14:33.377: INFO: Got endpoints: latency-svc-zvqql [2.057154467s]
Feb  9 13:14:33.451: INFO: Got endpoints: latency-svc-ch52z [1.9949491s]
Feb  9 13:14:33.485: INFO: Created: latency-svc-vxflv
Feb  9 13:14:33.535: INFO: Got endpoints: latency-svc-vxflv [2.049044132s]
Feb  9 13:14:33.536: INFO: Created: latency-svc-7djbp
Feb  9 13:14:33.670: INFO: Got endpoints: latency-svc-7djbp [1.8092872s]
Feb  9 13:14:33.690: INFO: Created: latency-svc-zj2rp
Feb  9 13:14:33.695: INFO: Got endpoints: latency-svc-zj2rp [1.578058564s]
Feb  9 13:14:33.750: INFO: Created: latency-svc-dwtp6
Feb  9 13:14:33.760: INFO: Got endpoints: latency-svc-dwtp6 [1.50629474s]
Feb  9 13:14:33.934: INFO: Created: latency-svc-q8v9q
Feb  9 13:14:33.948: INFO: Got endpoints: latency-svc-q8v9q [1.485595724s]
Feb  9 13:14:34.181: INFO: Created: latency-svc-ttdl5
Feb  9 13:14:34.188: INFO: Got endpoints: latency-svc-ttdl5 [1.699168949s]
Feb  9 13:14:34.244: INFO: Created: latency-svc-gb49v
Feb  9 13:14:34.356: INFO: Got endpoints: latency-svc-gb49v [1.800331858s]
Feb  9 13:14:34.390: INFO: Created: latency-svc-9wm7d
Feb  9 13:14:34.397: INFO: Got endpoints: latency-svc-9wm7d [1.716998472s]
Feb  9 13:14:34.439: INFO: Created: latency-svc-86jkm
Feb  9 13:14:34.445: INFO: Got endpoints: latency-svc-86jkm [1.711199587s]
Feb  9 13:14:34.613: INFO: Created: latency-svc-vlg47
Feb  9 13:14:34.633: INFO: Got endpoints: latency-svc-vlg47 [1.736331832s]
Feb  9 13:14:34.640: INFO: Created: latency-svc-vmp8z
Feb  9 13:14:34.640: INFO: Got endpoints: latency-svc-vmp8z [1.717081375s]
Feb  9 13:14:34.699: INFO: Created: latency-svc-6pvp7
Feb  9 13:14:34.802: INFO: Got endpoints: latency-svc-6pvp7 [1.690711958s]
Feb  9 13:14:34.887: INFO: Created: latency-svc-g5zqq
Feb  9 13:14:34.888: INFO: Got endpoints: latency-svc-g5zqq [1.595247171s]
Feb  9 13:14:35.009: INFO: Created: latency-svc-5pwg6
Feb  9 13:14:35.021: INFO: Got endpoints: latency-svc-5pwg6 [1.644112615s]
Feb  9 13:14:35.080: INFO: Created: latency-svc-kdk76
Feb  9 13:14:35.149: INFO: Got endpoints: latency-svc-kdk76 [1.696883818s]
Feb  9 13:14:35.193: INFO: Created: latency-svc-4csx8
Feb  9 13:14:35.199: INFO: Got endpoints: latency-svc-4csx8 [1.66273952s]
Feb  9 13:14:35.226: INFO: Created: latency-svc-4zt9s
Feb  9 13:14:35.242: INFO: Got endpoints: latency-svc-4zt9s [1.57145699s]
Feb  9 13:14:35.324: INFO: Created: latency-svc-4bh4d
Feb  9 13:14:35.335: INFO: Got endpoints: latency-svc-4bh4d [1.640450514s]
Feb  9 13:14:35.371: INFO: Created: latency-svc-ccrdj
Feb  9 13:14:35.378: INFO: Got endpoints: latency-svc-ccrdj [1.617549945s]
Feb  9 13:14:35.427: INFO: Created: latency-svc-5b9vr
Feb  9 13:14:35.427: INFO: Got endpoints: latency-svc-5b9vr [1.478960153s]
Feb  9 13:14:35.482: INFO: Created: latency-svc-8mjkx
Feb  9 13:14:35.491: INFO: Got endpoints: latency-svc-8mjkx [1.302442892s]
Feb  9 13:14:35.532: INFO: Created: latency-svc-2bs6l
Feb  9 13:14:35.534: INFO: Got endpoints: latency-svc-2bs6l [1.178038032s]
Feb  9 13:14:35.626: INFO: Created: latency-svc-sbf7l
Feb  9 13:14:35.665: INFO: Got endpoints: latency-svc-sbf7l [1.268269079s]
Feb  9 13:14:35.697: INFO: Created: latency-svc-8k9zq
Feb  9 13:14:35.697: INFO: Got endpoints: latency-svc-8k9zq [1.251527306s]
Feb  9 13:14:35.721: INFO: Created: latency-svc-gkjzf
Feb  9 13:14:35.767: INFO: Got endpoints: latency-svc-gkjzf [1.133898354s]
Feb  9 13:14:35.829: INFO: Created: latency-svc-f6xrz
Feb  9 13:14:35.851: INFO: Got endpoints: latency-svc-f6xrz [1.210900351s]
Feb  9 13:14:36.001: INFO: Created: latency-svc-99mzr
Feb  9 13:14:36.006: INFO: Got endpoints: latency-svc-99mzr [1.203923237s]
Feb  9 13:14:36.176: INFO: Created: latency-svc-qshrn
Feb  9 13:14:36.191: INFO: Got endpoints: latency-svc-qshrn [1.303436276s]
Feb  9 13:14:36.222: INFO: Created: latency-svc-9pqxr
Feb  9 13:14:36.230: INFO: Got endpoints: latency-svc-9pqxr [1.208711351s]
Feb  9 13:14:36.345: INFO: Created: latency-svc-ghv8h
Feb  9 13:14:36.352: INFO: Got endpoints: latency-svc-ghv8h [1.203351032s]
Feb  9 13:14:36.396: INFO: Created: latency-svc-m7qds
Feb  9 13:14:36.405: INFO: Got endpoints: latency-svc-m7qds [1.205424783s]
Feb  9 13:14:36.444: INFO: Created: latency-svc-bdmst
Feb  9 13:14:36.505: INFO: Got endpoints: latency-svc-bdmst [1.262419847s]
Feb  9 13:14:36.611: INFO: Created: latency-svc-fx7jz
Feb  9 13:14:36.681: INFO: Got endpoints: latency-svc-fx7jz [1.345013356s]
Feb  9 13:14:36.711: INFO: Created: latency-svc-blv2x
Feb  9 13:14:36.754: INFO: Got endpoints: latency-svc-blv2x [1.376310848s]
Feb  9 13:14:36.756: INFO: Created: latency-svc-xvgsp
Feb  9 13:14:36.764: INFO: Got endpoints: latency-svc-xvgsp [1.33636976s]
Feb  9 13:14:36.894: INFO: Created: latency-svc-2hdqv
Feb  9 13:14:36.919: INFO: Got endpoints: latency-svc-2hdqv [1.428144755s]
Feb  9 13:14:36.989: INFO: Created: latency-svc-7dwh8
Feb  9 13:14:37.143: INFO: Got endpoints: latency-svc-7dwh8 [1.609354188s]
Feb  9 13:14:37.197: INFO: Created: latency-svc-27bw2
Feb  9 13:14:37.338: INFO: Got endpoints: latency-svc-27bw2 [1.672643043s]
Feb  9 13:14:37.347: INFO: Created: latency-svc-q928z
Feb  9 13:14:37.354: INFO: Got endpoints: latency-svc-q928z [1.65695003s]
Feb  9 13:14:37.396: INFO: Created: latency-svc-bw9t9
Feb  9 13:14:37.458: INFO: Got endpoints: latency-svc-bw9t9 [1.689753821s]
Feb  9 13:14:37.469: INFO: Created: latency-svc-l2h9c
Feb  9 13:14:37.473: INFO: Got endpoints: latency-svc-l2h9c [1.621708786s]
Feb  9 13:14:37.534: INFO: Created: latency-svc-6dj47
Feb  9 13:14:37.553: INFO: Got endpoints: latency-svc-6dj47 [1.547253485s]
Feb  9 13:14:37.674: INFO: Created: latency-svc-4xjxj
Feb  9 13:14:37.682: INFO: Got endpoints: latency-svc-4xjxj [1.490067025s]
Feb  9 13:14:37.712: INFO: Created: latency-svc-w69r9
Feb  9 13:14:37.724: INFO: Got endpoints: latency-svc-w69r9 [1.493812366s]
Feb  9 13:14:37.771: INFO: Created: latency-svc-6kvg5
Feb  9 13:14:37.827: INFO: Got endpoints: latency-svc-6kvg5 [1.473775808s]
Feb  9 13:14:37.869: INFO: Created: latency-svc-s6bbl
Feb  9 13:14:37.888: INFO: Got endpoints: latency-svc-s6bbl [1.483670812s]
Feb  9 13:14:37.904: INFO: Created: latency-svc-lqgtr
Feb  9 13:14:37.904: INFO: Got endpoints: latency-svc-lqgtr [1.399326013s]
Feb  9 13:14:37.975: INFO: Created: latency-svc-zbsvz
Feb  9 13:14:38.057: INFO: Got endpoints: latency-svc-zbsvz [1.375718184s]
Feb  9 13:14:38.061: INFO: Created: latency-svc-khqz8
Feb  9 13:14:38.133: INFO: Got endpoints: latency-svc-khqz8 [1.379126555s]
Feb  9 13:14:38.147: INFO: Created: latency-svc-rknzk
Feb  9 13:14:38.186: INFO: Created: latency-svc-l54bf
Feb  9 13:14:38.187: INFO: Got endpoints: latency-svc-rknzk [1.423339866s]
Feb  9 13:14:38.214: INFO: Got endpoints: latency-svc-l54bf [1.29474888s]
Feb  9 13:14:38.223: INFO: Created: latency-svc-24mck
Feb  9 13:14:38.224: INFO: Got endpoints: latency-svc-24mck [1.080603539s]
Feb  9 13:14:38.297: INFO: Created: latency-svc-cz2vw
Feb  9 13:14:38.306: INFO: Got endpoints: latency-svc-cz2vw [966.962648ms]
Feb  9 13:14:38.346: INFO: Created: latency-svc-gnsfn
Feb  9 13:14:38.350: INFO: Got endpoints: latency-svc-gnsfn [996.047835ms]
Feb  9 13:14:38.442: INFO: Created: latency-svc-vjgzf
Feb  9 13:14:38.463: INFO: Got endpoints: latency-svc-vjgzf [1.005239768s]
Feb  9 13:14:38.514: INFO: Created: latency-svc-kcbh2
Feb  9 13:14:38.551: INFO: Got endpoints: latency-svc-kcbh2 [1.077621517s]
Feb  9 13:14:38.632: INFO: Created: latency-svc-44lr4
Feb  9 13:14:38.703: INFO: Got endpoints: latency-svc-44lr4 [1.149218269s]
Feb  9 13:14:38.758: INFO: Created: latency-svc-tmrp7
Feb  9 13:14:38.773: INFO: Got endpoints: latency-svc-tmrp7 [1.090885738s]
Feb  9 13:14:38.832: INFO: Created: latency-svc-l52xm
Feb  9 13:14:38.851: INFO: Got endpoints: latency-svc-l52xm [1.126349729s]
Feb  9 13:14:38.934: INFO: Created: latency-svc-vlmrz
Feb  9 13:14:38.940: INFO: Got endpoints: latency-svc-vlmrz [1.112943941s]
Feb  9 13:14:38.994: INFO: Created: latency-svc-fh87f
Feb  9 13:14:39.018: INFO: Got endpoints: latency-svc-fh87f [1.128986384s]
Feb  9 13:14:39.157: INFO: Created: latency-svc-nwczq
Feb  9 13:14:39.158: INFO: Got endpoints: latency-svc-nwczq [1.253443748s]
Feb  9 13:14:39.244: INFO: Created: latency-svc-42vks
Feb  9 13:14:39.284: INFO: Got endpoints: latency-svc-42vks [1.227317141s]
Feb  9 13:14:39.319: INFO: Created: latency-svc-wdngh
Feb  9 13:14:39.328: INFO: Got endpoints: latency-svc-wdngh [1.193776587s]
Feb  9 13:14:39.360: INFO: Created: latency-svc-vpvpf
Feb  9 13:14:39.375: INFO: Got endpoints: latency-svc-vpvpf [1.187666334s]
Feb  9 13:14:39.484: INFO: Created: latency-svc-9bzd6
Feb  9 13:14:39.499: INFO: Got endpoints: latency-svc-9bzd6 [1.285067399s]
Feb  9 13:14:39.530: INFO: Created: latency-svc-8hswj
Feb  9 13:14:39.707: INFO: Got endpoints: latency-svc-8hswj [1.482255726s]
Feb  9 13:14:39.708: INFO: Created: latency-svc-pkjzg
Feb  9 13:14:39.719: INFO: Got endpoints: latency-svc-pkjzg [1.412475508s]
Feb  9 13:14:39.784: INFO: Created: latency-svc-n59nm
Feb  9 13:14:39.784: INFO: Got endpoints: latency-svc-n59nm [1.433935137s]
Feb  9 13:14:39.888: INFO: Created: latency-svc-vjss4
Feb  9 13:14:39.894: INFO: Got endpoints: latency-svc-vjss4 [1.429971914s]
Feb  9 13:14:39.933: INFO: Created: latency-svc-sfdh6
Feb  9 13:14:39.945: INFO: Got endpoints: latency-svc-sfdh6 [1.393699514s]
Feb  9 13:14:40.073: INFO: Created: latency-svc-zgw57
Feb  9 13:14:40.086: INFO: Got endpoints: latency-svc-zgw57 [1.383477925s]
Feb  9 13:14:40.139: INFO: Created: latency-svc-b6ftz
Feb  9 13:14:40.162: INFO: Got endpoints: latency-svc-b6ftz [1.389176358s]
Feb  9 13:14:40.339: INFO: Created: latency-svc-qs2bb
Feb  9 13:14:40.340: INFO: Got endpoints: latency-svc-qs2bb [1.488298943s]
Feb  9 13:14:40.495: INFO: Created: latency-svc-wbdgb
Feb  9 13:14:40.511: INFO: Got endpoints: latency-svc-wbdgb [1.570669384s]
Feb  9 13:14:40.594: INFO: Created: latency-svc-pjqdk
Feb  9 13:14:40.594: INFO: Got endpoints: latency-svc-pjqdk [1.576689304s]
Feb  9 13:14:40.678: INFO: Created: latency-svc-kwh2d
Feb  9 13:14:40.703: INFO: Got endpoints: latency-svc-kwh2d [1.544945827s]
Feb  9 13:14:40.746: INFO: Created: latency-svc-cmlsj
Feb  9 13:14:40.749: INFO: Got endpoints: latency-svc-cmlsj [1.464002913s]
Feb  9 13:14:40.908: INFO: Created: latency-svc-lnfsf
Feb  9 13:14:40.963: INFO: Got endpoints: latency-svc-lnfsf [259.340413ms]
Feb  9 13:14:40.998: INFO: Created: latency-svc-79gp5
Feb  9 13:14:41.096: INFO: Got endpoints: latency-svc-79gp5 [1.768019088s]
Feb  9 13:14:41.138: INFO: Created: latency-svc-hbbxw
Feb  9 13:14:41.153: INFO: Got endpoints: latency-svc-hbbxw [1.777887505s]
Feb  9 13:14:41.314: INFO: Created: latency-svc-9kqf8
Feb  9 13:14:41.327: INFO: Got endpoints: latency-svc-9kqf8 [1.827367183s]
Feb  9 13:14:41.397: INFO: Created: latency-svc-8vtjw
Feb  9 13:14:41.410: INFO: Got endpoints: latency-svc-8vtjw [1.703462825s]
Feb  9 13:14:41.513: INFO: Created: latency-svc-4wht6
Feb  9 13:14:41.524: INFO: Got endpoints: latency-svc-4wht6 [1.805320112s]
Feb  9 13:14:41.561: INFO: Created: latency-svc-jjf94
Feb  9 13:14:41.578: INFO: Got endpoints: latency-svc-jjf94 [1.793166367s]
Feb  9 13:14:41.707: INFO: Created: latency-svc-dnrnl
Feb  9 13:14:41.732: INFO: Got endpoints: latency-svc-dnrnl [1.837907219s]
Feb  9 13:14:41.788: INFO: Created: latency-svc-95mf2
Feb  9 13:14:41.890: INFO: Got endpoints: latency-svc-95mf2 [1.944612692s]
Feb  9 13:14:41.905: INFO: Created: latency-svc-bdg6s
Feb  9 13:14:41.913: INFO: Got endpoints: latency-svc-bdg6s [1.826802112s]
Feb  9 13:14:41.973: INFO: Created: latency-svc-gzvdt
Feb  9 13:14:41.976: INFO: Got endpoints: latency-svc-gzvdt [1.813324143s]
Feb  9 13:14:42.094: INFO: Created: latency-svc-tf269
Feb  9 13:14:42.100: INFO: Got endpoints: latency-svc-tf269 [1.760767841s]
Feb  9 13:14:42.156: INFO: Created: latency-svc-nm5fj
Feb  9 13:14:42.340: INFO: Got endpoints: latency-svc-nm5fj [1.828356151s]
Feb  9 13:14:42.364: INFO: Created: latency-svc-xq4pt
Feb  9 13:14:42.425: INFO: Created: latency-svc-sfgfv
Feb  9 13:14:42.430: INFO: Got endpoints: latency-svc-xq4pt [1.834955795s]
Feb  9 13:14:42.508: INFO: Got endpoints: latency-svc-sfgfv [1.759168888s]
Feb  9 13:14:42.559: INFO: Created: latency-svc-twtwv
Feb  9 13:14:42.560: INFO: Got endpoints: latency-svc-twtwv [1.59687335s]
Feb  9 13:14:42.740: INFO: Created: latency-svc-rpdd4
Feb  9 13:14:42.746: INFO: Got endpoints: latency-svc-rpdd4 [1.649695345s]
Feb  9 13:14:42.997: INFO: Created: latency-svc-6p8vr
Feb  9 13:14:43.000: INFO: Got endpoints: latency-svc-6p8vr [1.846758435s]
Feb  9 13:14:43.082: INFO: Created: latency-svc-wj4t8
Feb  9 13:14:43.174: INFO: Got endpoints: latency-svc-wj4t8 [1.847042494s]
Feb  9 13:14:43.185: INFO: Created: latency-svc-wprrd
Feb  9 13:14:43.190: INFO: Got endpoints: latency-svc-wprrd [1.779325846s]
Feb  9 13:14:43.240: INFO: Created: latency-svc-ntmkq
Feb  9 13:14:43.435: INFO: Created: latency-svc-hc9gx
Feb  9 13:14:43.435: INFO: Got endpoints: latency-svc-ntmkq [1.911027822s]
Feb  9 13:14:43.452: INFO: Got endpoints: latency-svc-hc9gx [1.873849235s]
Feb  9 13:14:43.512: INFO: Created: latency-svc-mdkkf
Feb  9 13:14:43.521: INFO: Got endpoints: latency-svc-mdkkf [1.788862051s]
Feb  9 13:14:43.649: INFO: Created: latency-svc-mwgqq
Feb  9 13:14:43.665: INFO: Got endpoints: latency-svc-mwgqq [1.774064796s]
Feb  9 13:14:43.705: INFO: Created: latency-svc-bn8jm
Feb  9 13:14:43.711: INFO: Got endpoints: latency-svc-bn8jm [1.797485775s]
Feb  9 13:14:43.886: INFO: Created: latency-svc-t5f9v
Feb  9 13:14:43.908: INFO: Got endpoints: latency-svc-t5f9v [1.931913302s]
Feb  9 13:14:43.953: INFO: Created: latency-svc-56k9q
Feb  9 13:14:43.972: INFO: Got endpoints: latency-svc-56k9q [1.871710416s]
Feb  9 13:14:44.125: INFO: Created: latency-svc-dtntq
Feb  9 13:14:44.138: INFO: Got endpoints: latency-svc-dtntq [1.79835771s]
Feb  9 13:14:44.176: INFO: Created: latency-svc-gs7ml
Feb  9 13:14:44.177: INFO: Got endpoints: latency-svc-gs7ml [1.747484006s]
Feb  9 13:14:44.293: INFO: Created: latency-svc-k977n
Feb  9 13:14:44.296: INFO: Got endpoints: latency-svc-k977n [1.788365795s]
Feb  9 13:14:44.337: INFO: Created: latency-svc-xxz6v
Feb  9 13:14:44.345: INFO: Got endpoints: latency-svc-xxz6v [1.784948401s]
Feb  9 13:14:44.487: INFO: Created: latency-svc-d6rfv
Feb  9 13:14:44.504: INFO: Got endpoints: latency-svc-d6rfv [1.75754337s]
Feb  9 13:14:44.610: INFO: Created: latency-svc-d4qd6
Feb  9 13:14:44.742: INFO: Got endpoints: latency-svc-d4qd6 [1.741500213s]
Feb  9 13:14:44.770: INFO: Created: latency-svc-n6kz9
Feb  9 13:14:44.813: INFO: Got endpoints: latency-svc-n6kz9 [1.638277846s]
Feb  9 13:14:45.051: INFO: Created: latency-svc-lsj5p
Feb  9 13:14:45.058: INFO: Got endpoints: latency-svc-lsj5p [1.868196163s]
Feb  9 13:14:45.106: INFO: Created: latency-svc-pvjf5
Feb  9 13:14:45.116: INFO: Got endpoints: latency-svc-pvjf5 [1.680257573s]
Feb  9 13:14:45.267: INFO: Created: latency-svc-km6sb
Feb  9 13:14:45.267: INFO: Got endpoints: latency-svc-km6sb [1.815383472s]
Feb  9 13:14:45.312: INFO: Created: latency-svc-95m8p
Feb  9 13:14:45.321: INFO: Got endpoints: latency-svc-95m8p [1.799549092s]
Feb  9 13:14:45.482: INFO: Created: latency-svc-2rmbm
Feb  9 13:14:45.491: INFO: Got endpoints: latency-svc-2rmbm [1.826166811s]
Feb  9 13:14:45.551: INFO: Created: latency-svc-c27fr
Feb  9 13:14:45.566: INFO: Got endpoints: latency-svc-c27fr [1.854666272s]
Feb  9 13:14:45.765: INFO: Created: latency-svc-gmpsx
Feb  9 13:14:45.770: INFO: Got endpoints: latency-svc-gmpsx [1.861391723s]
Feb  9 13:14:45.845: INFO: Created: latency-svc-g696f
Feb  9 13:14:46.200: INFO: Got endpoints: latency-svc-g696f [2.227375463s]
Feb  9 13:14:46.212: INFO: Created: latency-svc-4tk6c
Feb  9 13:14:46.230: INFO: Got endpoints: latency-svc-4tk6c [2.091391876s]
Feb  9 13:14:46.272: INFO: Created: latency-svc-4wwxr
Feb  9 13:14:46.282: INFO: Got endpoints: latency-svc-4wwxr [2.104342048s]
Feb  9 13:14:46.563: INFO: Created: latency-svc-vq4hd
Feb  9 13:14:46.575: INFO: Got endpoints: latency-svc-vq4hd [2.278394596s]
Feb  9 13:14:46.668: INFO: Created: latency-svc-rvrmb
Feb  9 13:14:46.753: INFO: Got endpoints: latency-svc-rvrmb [2.40716201s]
Feb  9 13:14:46.779: INFO: Created: latency-svc-trrqw
Feb  9 13:14:46.798: INFO: Got endpoints: latency-svc-trrqw [2.293976445s]
Feb  9 13:14:46.835: INFO: Created: latency-svc-8ttx9
Feb  9 13:14:46.844: INFO: Got endpoints: latency-svc-8ttx9 [2.101596198s]
Feb  9 13:14:46.953: INFO: Created: latency-svc-g2jkc
Feb  9 13:14:46.970: INFO: Got endpoints: latency-svc-g2jkc [2.15672904s]
Feb  9 13:14:47.064: INFO: Created: latency-svc-htw72
Feb  9 13:14:47.185: INFO: Got endpoints: latency-svc-htw72 [2.12682987s]
Feb  9 13:14:47.221: INFO: Created: latency-svc-r6q6m
Feb  9 13:14:47.244: INFO: Got endpoints: latency-svc-r6q6m [2.126986868s]
Feb  9 13:14:47.388: INFO: Created: latency-svc-ngjdp
Feb  9 13:14:47.395: INFO: Got endpoints: latency-svc-ngjdp [2.127375453s]
Feb  9 13:14:47.455: INFO: Created: latency-svc-zl2xj
Feb  9 13:14:47.469: INFO: Got endpoints: latency-svc-zl2xj [2.148152729s]
Feb  9 13:14:47.571: INFO: Created: latency-svc-ld9h8
Feb  9 13:14:47.579: INFO: Got endpoints: latency-svc-ld9h8 [2.087605723s]
Feb  9 13:14:47.622: INFO: Created: latency-svc-ppjws
Feb  9 13:14:47.632: INFO: Got endpoints: latency-svc-ppjws [2.066034855s]
Feb  9 13:14:47.740: INFO: Created: latency-svc-82ljm
Feb  9 13:14:47.774: INFO: Got endpoints: latency-svc-82ljm [2.003542477s]
Feb  9 13:14:47.781: INFO: Created: latency-svc-5fg24
Feb  9 13:14:47.788: INFO: Got endpoints: latency-svc-5fg24 [1.58687122s]
Feb  9 13:14:47.936: INFO: Created: latency-svc-88jqh
Feb  9 13:14:47.995: INFO: Got endpoints: latency-svc-88jqh [1.76412992s]
Feb  9 13:14:48.002: INFO: Created: latency-svc-gx2d4
Feb  9 13:14:48.110: INFO: Got endpoints: latency-svc-gx2d4 [1.827690829s]
Feb  9 13:14:48.141: INFO: Created: latency-svc-lbk4z
Feb  9 13:14:48.173: INFO: Got endpoints: latency-svc-lbk4z [1.598124994s]
Feb  9 13:14:48.344: INFO: Created: latency-svc-ndzx2
Feb  9 13:14:48.362: INFO: Got endpoints: latency-svc-ndzx2 [1.609231084s]
Feb  9 13:14:48.405: INFO: Created: latency-svc-98ndf
Feb  9 13:14:48.410: INFO: Got endpoints: latency-svc-98ndf [1.611453289s]
Feb  9 13:14:48.447: INFO: Created: latency-svc-b2wgx
Feb  9 13:14:48.509: INFO: Got endpoints: latency-svc-b2wgx [1.664341289s]
Feb  9 13:14:48.542: INFO: Created: latency-svc-lfsbd
Feb  9 13:14:48.559: INFO: Got endpoints: latency-svc-lfsbd [1.588254388s]
Feb  9 13:14:48.698: INFO: Created: latency-svc-985nl
Feb  9 13:14:48.698: INFO: Got endpoints: latency-svc-985nl [1.512688244s]
Feb  9 13:14:48.726: INFO: Created: latency-svc-zjvmt
Feb  9 13:14:48.892: INFO: Got endpoints: latency-svc-zjvmt [1.647871281s]
Feb  9 13:14:48.973: INFO: Created: latency-svc-dzqp9
Feb  9 13:14:49.231: INFO: Got endpoints: latency-svc-dzqp9 [1.83601643s]
Feb  9 13:14:49.315: INFO: Created: latency-svc-ksqmq
Feb  9 13:14:49.316: INFO: Got endpoints: latency-svc-ksqmq [1.846818313s]
Feb  9 13:14:49.452: INFO: Created: latency-svc-js7xg
Feb  9 13:14:49.461: INFO: Got endpoints: latency-svc-js7xg [1.881966877s]
Feb  9 13:14:49.503: INFO: Created: latency-svc-skdqt
Feb  9 13:14:49.576: INFO: Got endpoints: latency-svc-skdqt [1.942938679s]
Feb  9 13:14:49.613: INFO: Created: latency-svc-grhfh
Feb  9 13:14:49.637: INFO: Created: latency-svc-4wld4
Feb  9 13:14:49.637: INFO: Got endpoints: latency-svc-grhfh [1.862389187s]
Feb  9 13:14:49.647: INFO: Got endpoints: latency-svc-4wld4 [1.85827811s]
Feb  9 13:14:49.747: INFO: Created: latency-svc-ndh6g
Feb  9 13:14:49.753: INFO: Got endpoints: latency-svc-ndh6g [1.75802511s]
Feb  9 13:14:49.810: INFO: Created: latency-svc-m6c5l
Feb  9 13:14:49.817: INFO: Got endpoints: latency-svc-m6c5l [1.706288791s]
Feb  9 13:14:49.917: INFO: Created: latency-svc-8lsh9
Feb  9 13:14:49.919: INFO: Got endpoints: latency-svc-8lsh9 [1.745506404s]
Feb  9 13:14:49.966: INFO: Created: latency-svc-nck7s
Feb  9 13:14:49.977: INFO: Got endpoints: latency-svc-nck7s [1.614645652s]
Feb  9 13:14:50.152: INFO: Created: latency-svc-bcchj
Feb  9 13:14:50.158: INFO: Got endpoints: latency-svc-bcchj [1.747500312s]
Feb  9 13:14:50.277: INFO: Created: latency-svc-j4vt8
Feb  9 13:14:50.285: INFO: Got endpoints: latency-svc-j4vt8 [1.775358452s]
Feb  9 13:14:50.338: INFO: Created: latency-svc-xkns2
Feb  9 13:14:50.429: INFO: Got endpoints: latency-svc-xkns2 [1.869558568s]
Feb  9 13:14:50.463: INFO: Created: latency-svc-plftk
Feb  9 13:14:50.469: INFO: Got endpoints: latency-svc-plftk [1.77031279s]
Feb  9 13:14:50.515: INFO: Created: latency-svc-6wjxp
Feb  9 13:14:50.608: INFO: Got endpoints: latency-svc-6wjxp [1.715350094s]
Feb  9 13:14:50.636: INFO: Created: latency-svc-jg9xz
Feb  9 13:14:50.640: INFO: Got endpoints: latency-svc-jg9xz [1.408201647s]
Feb  9 13:14:50.705: INFO: Created: latency-svc-v9b5k
Feb  9 13:14:50.785: INFO: Got endpoints: latency-svc-v9b5k [1.468791807s]
Feb  9 13:14:50.827: INFO: Created: latency-svc-nn6gr
Feb  9 13:14:50.831: INFO: Got endpoints: latency-svc-nn6gr [1.369768554s]
Feb  9 13:14:50.878: INFO: Created: latency-svc-2tnkr
Feb  9 13:14:50.947: INFO: Got endpoints: latency-svc-2tnkr [1.370622997s]
Feb  9 13:14:50.986: INFO: Created: latency-svc-mlsx9
Feb  9 13:14:50.997: INFO: Got endpoints: latency-svc-mlsx9 [1.360201956s]
Feb  9 13:14:51.172: INFO: Created: latency-svc-4mzn6
Feb  9 13:14:51.214: INFO: Got endpoints: latency-svc-4mzn6 [1.567664395s]
Feb  9 13:14:51.218: INFO: Created: latency-svc-ss2r6
Feb  9 13:14:51.226: INFO: Got endpoints: latency-svc-ss2r6 [1.473333045s]
Feb  9 13:14:51.389: INFO: Created: latency-svc-4zkz8
Feb  9 13:14:51.573: INFO: Created: latency-svc-ngfwc
Feb  9 13:14:51.582: INFO: Got endpoints: latency-svc-4zkz8 [1.764596784s]
Feb  9 13:14:51.589: INFO: Got endpoints: latency-svc-ngfwc [1.669188772s]
Feb  9 13:14:51.803: INFO: Created: latency-svc-r96hz
Feb  9 13:14:51.843: INFO: Got endpoints: latency-svc-r96hz [1.864689423s]
Feb  9 13:14:51.845: INFO: Created: latency-svc-bhd26
Feb  9 13:14:51.866: INFO: Got endpoints: latency-svc-bhd26 [1.708116608s]
Feb  9 13:14:51.990: INFO: Created: latency-svc-pg9fw
Feb  9 13:14:52.008: INFO: Got endpoints: latency-svc-pg9fw [1.723049431s]
Feb  9 13:14:52.114: INFO: Created: latency-svc-dqf8g
Feb  9 13:14:52.122: INFO: Got endpoints: latency-svc-dqf8g [1.692646727s]
Feb  9 13:14:52.181: INFO: Created: latency-svc-wlnzj
Feb  9 13:14:52.196: INFO: Got endpoints: latency-svc-wlnzj [1.726902704s]
Feb  9 13:14:52.295: INFO: Created: latency-svc-knsr2
Feb  9 13:14:52.306: INFO: Got endpoints: latency-svc-knsr2 [1.698171139s]
Feb  9 13:14:52.352: INFO: Created: latency-svc-tg2mj
Feb  9 13:14:52.356: INFO: Got endpoints: latency-svc-tg2mj [1.71641171s]
Feb  9 13:14:52.472: INFO: Created: latency-svc-bqstm
Feb  9 13:14:52.482: INFO: Got endpoints: latency-svc-bqstm [1.696087622s]
Feb  9 13:14:52.594: INFO: Created: latency-svc-ljzjp
Feb  9 13:14:52.601: INFO: Got endpoints: latency-svc-ljzjp [1.77028533s]
Feb  9 13:14:52.664: INFO: Created: latency-svc-tjhf6
Feb  9 13:14:52.667: INFO: Got endpoints: latency-svc-tjhf6 [1.719736659s]
Feb  9 13:14:52.667: INFO: Latencies: [81.706535ms 259.340413ms 262.903995ms 269.11167ms 309.309211ms 454.542042ms 470.157384ms 606.259999ms 635.692661ms 966.962648ms 996.047835ms 1.005239768s 1.009602463s 1.077621517s 1.080603539s 1.090885738s 1.112943941s 1.126349729s 1.128986384s 1.133898354s 1.149218269s 1.178038032s 1.187666334s 1.193776587s 1.203351032s 1.203923237s 1.205424783s 1.208711351s 1.210900351s 1.227317141s 1.251527306s 1.253443748s 1.262419847s 1.265442124s 1.268269079s 1.285067399s 1.29474888s 1.302442892s 1.303436276s 1.33636976s 1.345013356s 1.360201956s 1.369768554s 1.370622997s 1.375718184s 1.376310848s 1.379126555s 1.383477925s 1.389176358s 1.393699514s 1.399326013s 1.403264055s 1.408201647s 1.412475508s 1.423339866s 1.428144755s 1.429971914s 1.433935137s 1.464002913s 1.468791807s 1.473333045s 1.473775808s 1.478960153s 1.482255726s 1.483670812s 1.485595724s 1.488298943s 1.490067025s 1.493812366s 1.50629474s 1.512688244s 1.544945827s 1.547253485s 1.567664395s 1.570669384s 1.57145699s 1.576689304s 1.578058564s 1.58687122s 1.588254388s 1.595247171s 1.59687335s 1.598124994s 1.609231084s 1.609354188s 1.611196691s 1.611453289s 1.614645652s 1.617549945s 1.621708786s 1.638241076s 1.638277846s 1.640450514s 1.644112615s 1.647871281s 1.649695345s 1.65695003s 1.66273952s 1.664341289s 1.669188772s 1.672643043s 1.680257573s 1.689753821s 1.690711958s 1.692646727s 1.696087622s 1.696883818s 1.698171139s 1.699168949s 1.703462825s 1.705136852s 1.706288791s 1.708116608s 1.711199587s 1.715350094s 1.71641171s 1.716998472s 1.717081375s 1.719736659s 1.723049431s 1.726902704s 1.736331832s 1.741500213s 1.745506404s 1.747484006s 1.747500312s 1.75754337s 1.75802511s 1.759168888s 1.760767841s 1.76412992s 1.764596784s 1.768019088s 1.77028533s 1.77031279s 1.774064796s 1.775358452s 1.777887505s 1.779325846s 1.784532801s 1.784948401s 1.788365795s 1.788862051s 1.793166367s 1.797485775s 1.79835771s 1.799549092s 1.800331858s 1.80263556s 1.803077427s 1.805320112s 1.8092872s 1.813324143s 1.815383472s 1.826166811s 1.826802112s 1.827367183s 1.827690829s 1.828356151s 1.829359608s 1.834955795s 1.83601643s 1.837907219s 1.846758435s 1.846818313s 1.847042494s 1.854666272s 1.85827811s 1.861391723s 1.862389187s 1.864689423s 1.868196163s 1.869558568s 1.871710416s 1.873849235s 1.881966877s 1.911027822s 1.931913302s 1.942938679s 1.944612692s 1.952668273s 1.987110753s 1.9949491s 2.003542477s 2.049044132s 2.057154467s 2.066034855s 2.087605723s 2.091391876s 2.101596198s 2.104342048s 2.12682987s 2.126986868s 2.127375453s 2.148152729s 2.15672904s 2.227375463s 2.278394596s 2.293976445s 2.40716201s]
Feb  9 13:14:52.667: INFO: 50 %ile: 1.672643043s
Feb  9 13:14:52.668: INFO: 90 %ile: 1.952668273s
Feb  9 13:14:52.668: INFO: 99 %ile: 2.293976445s
Feb  9 13:14:52.668: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:14:52.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-925" for this suite.
Feb  9 13:15:42.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:15:43.038: INFO: namespace svc-latency-925 deletion completed in 50.357316036s

• [SLOW TEST:81.678 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:15:43.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 13:15:43.166: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7e15543-99fe-45ec-959f-c6d3e4bdd7ea" in namespace "downward-api-8769" to be "success or failure"
Feb  9 13:15:43.178: INFO: Pod "downwardapi-volume-f7e15543-99fe-45ec-959f-c6d3e4bdd7ea": Phase="Pending", Reason="", readiness=false. Elapsed: 12.078246ms
Feb  9 13:15:45.188: INFO: Pod "downwardapi-volume-f7e15543-99fe-45ec-959f-c6d3e4bdd7ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022575596s
Feb  9 13:15:47.199: INFO: Pod "downwardapi-volume-f7e15543-99fe-45ec-959f-c6d3e4bdd7ea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03275069s
Feb  9 13:15:49.210: INFO: Pod "downwardapi-volume-f7e15543-99fe-45ec-959f-c6d3e4bdd7ea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044057946s
Feb  9 13:15:51.218: INFO: Pod "downwardapi-volume-f7e15543-99fe-45ec-959f-c6d3e4bdd7ea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05248757s
Feb  9 13:15:53.226: INFO: Pod "downwardapi-volume-f7e15543-99fe-45ec-959f-c6d3e4bdd7ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059799709s
STEP: Saw pod success
Feb  9 13:15:53.226: INFO: Pod "downwardapi-volume-f7e15543-99fe-45ec-959f-c6d3e4bdd7ea" satisfied condition "success or failure"
Feb  9 13:15:53.236: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-f7e15543-99fe-45ec-959f-c6d3e4bdd7ea container client-container: 
STEP: delete the pod
Feb  9 13:15:53.422: INFO: Waiting for pod downwardapi-volume-f7e15543-99fe-45ec-959f-c6d3e4bdd7ea to disappear
Feb  9 13:15:53.433: INFO: Pod downwardapi-volume-f7e15543-99fe-45ec-959f-c6d3e4bdd7ea no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:15:53.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8769" for this suite.
Feb  9 13:15:59.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:15:59.593: INFO: namespace downward-api-8769 deletion completed in 6.150572869s

• [SLOW TEST:16.555 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:15:59.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-58ddb065-3d9f-4f86-96a4-bfca06af4c44
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:15:59.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9715" for this suite.
Feb  9 13:16:05.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:16:05.917: INFO: namespace configmap-9715 deletion completed in 6.208683491s

• [SLOW TEST:6.324 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:16:05.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-6460c704-9ce4-47ff-b267-ed7eb592d050
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:16:05.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-262" for this suite.
Feb  9 13:16:12.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:16:12.257: INFO: namespace secrets-262 deletion completed in 6.216072145s

• [SLOW TEST:6.339 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:16:12.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:16:20.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6029" for this suite.
Feb  9 13:16:26.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:16:26.627: INFO: namespace kubelet-test-6029 deletion completed in 6.197186634s

• [SLOW TEST:14.369 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:16:26.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:16:34.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1837" for this suite.
Feb  9 13:17:36.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:17:37.040: INFO: namespace kubelet-test-1837 deletion completed in 1m2.163534067s

• [SLOW TEST:70.413 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:17:37.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-7k8ln in namespace proxy-9846
I0209 13:17:37.227111       8 runners.go:180] Created replication controller with name: proxy-service-7k8ln, namespace: proxy-9846, replica count: 1
I0209 13:17:38.278238       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:17:39.278847       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:17:40.279379       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:17:41.279969       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:17:42.280401       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:17:43.280880       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:17:44.281312       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:17:45.281786       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0209 13:17:46.282462       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0209 13:17:47.283067       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0209 13:17:48.283856       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0209 13:17:49.284661       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0209 13:17:50.285492       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0209 13:17:51.286109       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0209 13:17:52.287207       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0209 13:17:53.287979       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0209 13:17:54.288641       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0209 13:17:55.289268       8 runners.go:180] proxy-service-7k8ln Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb  9 13:17:55.303: INFO: setup took 18.132352233s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb  9 13:17:55.397: INFO: (0) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 93.375145ms)
Feb  9 13:17:55.397: INFO: (0) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 93.083449ms)
Feb  9 13:17:55.397: INFO: (0) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 93.265478ms)
Feb  9 13:17:55.397: INFO: (0) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 93.508669ms)
Feb  9 13:17:55.397: INFO: (0) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 94.039226ms)
Feb  9 13:17:55.397: INFO: (0) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 93.954753ms)
Feb  9 13:17:55.398: INFO: (0) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 94.395178ms)
Feb  9 13:17:55.404: INFO: (0) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 100.480922ms)
Feb  9 13:17:55.404: INFO: (0) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 100.624483ms)
Feb  9 13:17:55.404: INFO: (0) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 101.025746ms)
Feb  9 13:17:55.404: INFO: (0) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 101.301801ms)
Feb  9 13:17:55.410: INFO: (0) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 106.797293ms)
Feb  9 13:17:55.410: INFO: (0) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 107.08844ms)
Feb  9 13:17:55.411: INFO: (0) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 107.746769ms)
Feb  9 13:17:55.411: INFO: (0) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 108.193623ms)
Feb  9 13:17:55.419: INFO: (0) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: ... (200; 19.467793ms)
Feb  9 13:17:55.440: INFO: (1) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 19.999047ms)
Feb  9 13:17:55.440: INFO: (1) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 20.015315ms)
Feb  9 13:17:55.440: INFO: (1) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 21.048688ms)
Feb  9 13:17:55.441: INFO: (1) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: test (200; 21.556727ms)
Feb  9 13:17:55.441: INFO: (1) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 21.596755ms)
Feb  9 13:17:55.441: INFO: (1) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 21.557835ms)
Feb  9 13:17:55.441: INFO: (1) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 21.847689ms)
Feb  9 13:17:55.448: INFO: (1) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 28.944231ms)
Feb  9 13:17:55.448: INFO: (1) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 28.793855ms)
Feb  9 13:17:55.448: INFO: (1) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 28.857326ms)
Feb  9 13:17:55.450: INFO: (1) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 30.301995ms)
Feb  9 13:17:55.450: INFO: (1) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 30.381235ms)
Feb  9 13:17:55.451: INFO: (1) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 31.663695ms)
Feb  9 13:17:55.467: INFO: (2) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 15.062372ms)
Feb  9 13:17:55.467: INFO: (2) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 15.377117ms)
Feb  9 13:17:55.467: INFO: (2) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 15.492402ms)
Feb  9 13:17:55.467: INFO: (2) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 15.741694ms)
Feb  9 13:17:55.467: INFO: (2) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 16.074483ms)
Feb  9 13:17:55.467: INFO: (2) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 16.07838ms)
Feb  9 13:17:55.468: INFO: (2) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 16.095095ms)
Feb  9 13:17:55.468: INFO: (2) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 15.925536ms)
Feb  9 13:17:55.468: INFO: (2) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 16.223642ms)
Feb  9 13:17:55.468: INFO: (2) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: ... (200; 18.157356ms)
Feb  9 13:17:55.470: INFO: (2) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 17.943806ms)
Feb  9 13:17:55.470: INFO: (2) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 18.094144ms)
Feb  9 13:17:55.488: INFO: (3) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 17.760335ms)
Feb  9 13:17:55.488: INFO: (3) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 18.176256ms)
Feb  9 13:17:55.488: INFO: (3) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 17.653661ms)
Feb  9 13:17:55.491: INFO: (3) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 21.331519ms)
Feb  9 13:17:55.491: INFO: (3) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: ... (200; 22.263431ms)
Feb  9 13:17:55.492: INFO: (3) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 22.519529ms)
Feb  9 13:17:55.494: INFO: (3) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 24.384587ms)
Feb  9 13:17:55.494: INFO: (3) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 24.27684ms)
Feb  9 13:17:55.495: INFO: (3) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 24.559657ms)
Feb  9 13:17:55.495: INFO: (3) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 24.813289ms)
Feb  9 13:17:55.495: INFO: (3) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 24.976326ms)
Feb  9 13:17:55.496: INFO: (3) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 25.991446ms)
Feb  9 13:17:55.496: INFO: (3) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 25.925652ms)
Feb  9 13:17:55.496: INFO: (3) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 25.962338ms)
Feb  9 13:17:55.508: INFO: (4) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 11.823909ms)
Feb  9 13:17:55.508: INFO: (4) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 11.833278ms)
Feb  9 13:17:55.508: INFO: (4) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 11.879628ms)
Feb  9 13:17:55.509: INFO: (4) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: test<... (200; 13.539917ms)
Feb  9 13:17:55.510: INFO: (4) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 13.350638ms)
Feb  9 13:17:55.515: INFO: (4) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 18.662997ms)
Feb  9 13:17:55.515: INFO: (4) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 18.903836ms)
Feb  9 13:17:55.515: INFO: (4) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 18.867574ms)
Feb  9 13:17:55.515: INFO: (4) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 19.210686ms)
Feb  9 13:17:55.516: INFO: (4) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 19.242029ms)
Feb  9 13:17:55.516: INFO: (4) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 19.515578ms)
Feb  9 13:17:55.516: INFO: (4) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 19.719399ms)
Feb  9 13:17:55.527: INFO: (5) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 10.480666ms)
Feb  9 13:17:55.527: INFO: (5) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 10.516674ms)
Feb  9 13:17:55.527: INFO: (5) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 10.463069ms)
Feb  9 13:17:55.527: INFO: (5) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 11.30461ms)
Feb  9 13:17:55.528: INFO: (5) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 11.091708ms)
Feb  9 13:17:55.528: INFO: (5) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 11.252498ms)
Feb  9 13:17:55.529: INFO: (5) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 12.434972ms)
Feb  9 13:17:55.530: INFO: (5) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 13.559595ms)
Feb  9 13:17:55.531: INFO: (5) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 14.113935ms)
Feb  9 13:17:55.531: INFO: (5) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 14.162419ms)
Feb  9 13:17:55.531: INFO: (5) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 14.284057ms)
Feb  9 13:17:55.531: INFO: (5) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 14.404601ms)
Feb  9 13:17:55.531: INFO: (5) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: ... (200; 3.908246ms)
Feb  9 13:17:55.536: INFO: (6) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 3.992938ms)
Feb  9 13:17:55.536: INFO: (6) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 4.879748ms)
Feb  9 13:17:55.538: INFO: (6) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 6.419269ms)
Feb  9 13:17:55.538: INFO: (6) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 6.460021ms)
Feb  9 13:17:55.538: INFO: (6) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 6.573423ms)
Feb  9 13:17:55.538: INFO: (6) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: test (200; 7.103924ms)
Feb  9 13:17:55.539: INFO: (6) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 7.417878ms)
Feb  9 13:17:55.539: INFO: (6) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 7.645381ms)
Feb  9 13:17:55.543: INFO: (6) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 11.800651ms)
Feb  9 13:17:55.544: INFO: (6) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 12.538021ms)
Feb  9 13:17:55.544: INFO: (6) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 12.447567ms)
Feb  9 13:17:55.545: INFO: (6) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 12.823479ms)
Feb  9 13:17:55.545: INFO: (6) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 13.635361ms)
Feb  9 13:17:55.546: INFO: (6) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 14.393827ms)
Feb  9 13:17:55.549: INFO: (7) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 3.11325ms)
Feb  9 13:17:55.551: INFO: (7) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 4.580159ms)
Feb  9 13:17:55.551: INFO: (7) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 5.229794ms)
Feb  9 13:17:55.551: INFO: (7) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: ... (200; 10.512616ms)
Feb  9 13:17:55.558: INFO: (7) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 12.224237ms)
Feb  9 13:17:55.559: INFO: (7) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 12.861769ms)
Feb  9 13:17:55.559: INFO: (7) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 12.769309ms)
Feb  9 13:17:55.561: INFO: (7) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 14.533057ms)
Feb  9 13:17:55.561: INFO: (7) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 14.510131ms)
Feb  9 13:17:55.561: INFO: (7) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 14.653054ms)
Feb  9 13:17:55.561: INFO: (7) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 14.719996ms)
Feb  9 13:17:55.561: INFO: (7) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 14.632551ms)
Feb  9 13:17:55.562: INFO: (7) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 16.191724ms)
Feb  9 13:17:55.562: INFO: (7) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 16.132583ms)
Feb  9 13:17:55.563: INFO: (7) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 16.587358ms)
Feb  9 13:17:55.570: INFO: (8) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 6.500958ms)
Feb  9 13:17:55.571: INFO: (8) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 6.942707ms)
Feb  9 13:17:55.571: INFO: (8) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 7.947827ms)
Feb  9 13:17:55.571: INFO: (8) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 8.695693ms)
Feb  9 13:17:55.572: INFO: (8) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 7.419344ms)
Feb  9 13:17:55.572: INFO: (8) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 7.857936ms)
Feb  9 13:17:55.572: INFO: (8) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 7.500235ms)
Feb  9 13:17:55.572: INFO: (8) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: test (200; 9.361646ms)
Feb  9 13:17:55.587: INFO: (9) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 9.462702ms)
Feb  9 13:17:55.587: INFO: (9) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 9.446888ms)
Feb  9 13:17:55.588: INFO: (9) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 9.83092ms)
Feb  9 13:17:55.588: INFO: (9) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 9.80296ms)
Feb  9 13:17:55.588: INFO: (9) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 10.096753ms)
Feb  9 13:17:55.590: INFO: (9) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 12.531484ms)
Feb  9 13:17:55.591: INFO: (9) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 13.297839ms)
Feb  9 13:17:55.591: INFO: (9) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 13.265868ms)
Feb  9 13:17:55.591: INFO: (9) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 13.21798ms)
Feb  9 13:17:55.591: INFO: (9) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 13.321802ms)
Feb  9 13:17:55.591: INFO: (9) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 13.412372ms)
Feb  9 13:17:55.606: INFO: (10) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 13.272733ms)
Feb  9 13:17:55.606: INFO: (10) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 13.396412ms)
Feb  9 13:17:55.606: INFO: (10) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: test (200; 14.077242ms)
Feb  9 13:17:55.607: INFO: (10) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 14.758528ms)
Feb  9 13:17:55.607: INFO: (10) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 15.376327ms)
Feb  9 13:17:55.607: INFO: (10) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 15.642647ms)
Feb  9 13:17:55.607: INFO: (10) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 15.309416ms)
Feb  9 13:17:55.608: INFO: (10) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 16.147078ms)
Feb  9 13:17:55.608: INFO: (10) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 16.041081ms)
Feb  9 13:17:55.608: INFO: (10) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 16.801483ms)
Feb  9 13:17:55.608: INFO: (10) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 16.956358ms)
Feb  9 13:17:55.610: INFO: (10) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 18.154853ms)
Feb  9 13:17:55.610: INFO: (10) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 18.719874ms)
Feb  9 13:17:55.611: INFO: (10) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 19.451667ms)
Feb  9 13:17:55.611: INFO: (10) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 19.088978ms)
Feb  9 13:17:55.620: INFO: (11) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 8.444878ms)
Feb  9 13:17:55.620: INFO: (11) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 8.378318ms)
Feb  9 13:17:55.620: INFO: (11) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 8.88195ms)
Feb  9 13:17:55.621: INFO: (11) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 9.891688ms)
Feb  9 13:17:55.621: INFO: (11) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 10.122555ms)
Feb  9 13:17:55.624: INFO: (11) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 13.107757ms)
Feb  9 13:17:55.625: INFO: (11) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 13.646632ms)
Feb  9 13:17:55.625: INFO: (11) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 13.651708ms)
Feb  9 13:17:55.625: INFO: (11) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 13.592233ms)
Feb  9 13:17:55.625: INFO: (11) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 13.737421ms)
Feb  9 13:17:55.626: INFO: (11) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 14.731948ms)
Feb  9 13:17:55.626: INFO: (11) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 14.752049ms)
Feb  9 13:17:55.627: INFO: (11) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 15.614462ms)
Feb  9 13:17:55.627: INFO: (11) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 15.95718ms)
Feb  9 13:17:55.627: INFO: (11) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 15.935596ms)
Feb  9 13:17:55.627: INFO: (11) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: test (200; 10.550212ms)
Feb  9 13:17:55.638: INFO: (12) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 10.626367ms)
Feb  9 13:17:55.638: INFO: (12) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 10.731907ms)
Feb  9 13:17:55.639: INFO: (12) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 11.225939ms)
Feb  9 13:17:55.639: INFO: (12) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 11.689324ms)
Feb  9 13:17:55.640: INFO: (12) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 12.250171ms)
Feb  9 13:17:55.640: INFO: (12) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 12.357757ms)
Feb  9 13:17:55.641: INFO: (12) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 13.290008ms)
Feb  9 13:17:55.641: INFO: (12) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 13.691118ms)
Feb  9 13:17:55.641: INFO: (12) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 13.736397ms)
Feb  9 13:17:55.642: INFO: (12) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 14.252178ms)
Feb  9 13:17:55.642: INFO: (12) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 14.544632ms)
Feb  9 13:17:55.642: INFO: (12) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 14.97433ms)
Feb  9 13:17:55.650: INFO: (13) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 7.637537ms)
Feb  9 13:17:55.652: INFO: (13) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 9.383045ms)
Feb  9 13:17:55.652: INFO: (13) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 9.375502ms)
Feb  9 13:17:55.652: INFO: (13) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 9.721004ms)
Feb  9 13:17:55.652: INFO: (13) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 10.013088ms)
Feb  9 13:17:55.653: INFO: (13) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: test (200; 10.400634ms)
Feb  9 13:17:55.653: INFO: (13) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 10.873459ms)
Feb  9 13:17:55.653: INFO: (13) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 10.860511ms)
Feb  9 13:17:55.654: INFO: (13) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 11.980332ms)
Feb  9 13:17:55.654: INFO: (13) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 11.890499ms)
Feb  9 13:17:55.654: INFO: (13) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 12.011561ms)
Feb  9 13:17:55.656: INFO: (13) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 13.304675ms)
Feb  9 13:17:55.656: INFO: (13) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 13.514579ms)
Feb  9 13:17:55.656: INFO: (13) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 13.528423ms)
Feb  9 13:17:55.664: INFO: (14) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 7.813436ms)
Feb  9 13:17:55.664: INFO: (14) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 7.776314ms)
Feb  9 13:17:55.664: INFO: (14) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 7.843393ms)
Feb  9 13:17:55.664: INFO: (14) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 7.816847ms)
Feb  9 13:17:55.666: INFO: (14) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: ... (200; 10.238675ms)
Feb  9 13:17:55.667: INFO: (14) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 11.38199ms)
Feb  9 13:17:55.668: INFO: (14) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 11.530451ms)
Feb  9 13:17:55.668: INFO: (14) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 11.552232ms)
Feb  9 13:17:55.668: INFO: (14) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 11.753271ms)
Feb  9 13:17:55.668: INFO: (14) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 11.728613ms)
Feb  9 13:17:55.668: INFO: (14) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 12.259466ms)
Feb  9 13:17:55.679: INFO: (15) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 10.259476ms)
Feb  9 13:17:55.679: INFO: (15) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 10.411158ms)
Feb  9 13:17:55.679: INFO: (15) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 10.802205ms)
Feb  9 13:17:55.679: INFO: (15) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 10.841681ms)
Feb  9 13:17:55.679: INFO: (15) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 10.833532ms)
Feb  9 13:17:55.680: INFO: (15) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 11.084572ms)
Feb  9 13:17:55.680: INFO: (15) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 11.121798ms)
Feb  9 13:17:55.680: INFO: (15) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: test<... (200; 7.071303ms)
Feb  9 13:17:55.691: INFO: (16) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 8.159481ms)
Feb  9 13:17:55.692: INFO: (16) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 8.53578ms)
Feb  9 13:17:55.692: INFO: (16) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 9.218527ms)
Feb  9 13:17:55.693: INFO: (16) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 9.59421ms)
Feb  9 13:17:55.693: INFO: (16) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 10.322445ms)
Feb  9 13:17:55.694: INFO: (16) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 10.313323ms)
Feb  9 13:17:55.693: INFO: (16) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 10.34859ms)
Feb  9 13:17:55.694: INFO: (16) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: test<... (200; 32.878011ms)
Feb  9 13:17:55.731: INFO: (17) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 32.943368ms)
Feb  9 13:17:55.731: INFO: (17) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 33.006844ms)
Feb  9 13:17:55.732: INFO: (17) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 34.478129ms)
Feb  9 13:17:55.732: INFO: (17) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 34.268225ms)
Feb  9 13:17:55.732: INFO: (17) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 34.61052ms)
Feb  9 13:17:55.733: INFO: (17) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 34.770734ms)
Feb  9 13:17:55.733: INFO: (17) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 34.905375ms)
Feb  9 13:17:55.733: INFO: (17) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 35.042697ms)
Feb  9 13:17:55.733: INFO: (17) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 35.294397ms)
Feb  9 13:17:55.733: INFO: (17) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 35.499331ms)
Feb  9 13:17:55.734: INFO: (17) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 35.867086ms)
Feb  9 13:17:55.734: INFO: (17) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 36.102978ms)
Feb  9 13:17:55.740: INFO: (18) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 6.016643ms)
Feb  9 13:17:55.740: INFO: (18) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: test (200; 7.307147ms)
Feb  9 13:17:55.741: INFO: (18) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 7.153384ms)
Feb  9 13:17:55.741: INFO: (18) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:460/proxy/: tls baz (200; 7.18699ms)
Feb  9 13:17:55.741: INFO: (18) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 7.147831ms)
Feb  9 13:17:55.741: INFO: (18) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:1080/proxy/: ... (200; 7.528535ms)
Feb  9 13:17:55.741: INFO: (18) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 7.440928ms)
Feb  9 13:17:55.742: INFO: (18) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:462/proxy/: tls qux (200; 7.570271ms)
Feb  9 13:17:55.743: INFO: (18) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 8.325848ms)
Feb  9 13:17:55.743: INFO: (18) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 8.454007ms)
Feb  9 13:17:55.743: INFO: (18) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 8.967502ms)
Feb  9 13:17:55.743: INFO: (18) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 9.153647ms)
Feb  9 13:17:55.744: INFO: (18) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 9.345466ms)
Feb  9 13:17:55.744: INFO: (18) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 9.487213ms)
Feb  9 13:17:55.750: INFO: (19) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 6.014421ms)
Feb  9 13:17:55.750: INFO: (19) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:162/proxy/: bar (200; 6.611041ms)
Feb  9 13:17:55.750: INFO: (19) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m/proxy/: test (200; 6.706693ms)
Feb  9 13:17:55.750: INFO: (19) /api/v1/namespaces/proxy-9846/pods/proxy-service-7k8ln-4gb4m:1080/proxy/: test<... (200; 6.733439ms)
Feb  9 13:17:55.750: INFO: (19) /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/: foo (200; 6.732437ms)
Feb  9 13:17:55.751: INFO: (19) /api/v1/namespaces/proxy-9846/pods/https:proxy-service-7k8ln-4gb4m:443/proxy/: ... (200; 7.836595ms)
Feb  9 13:17:55.754: INFO: (19) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname2/proxy/: bar (200; 10.376918ms)
Feb  9 13:17:55.755: INFO: (19) /api/v1/namespaces/proxy-9846/services/http:proxy-service-7k8ln:portname1/proxy/: foo (200; 10.912349ms)
Feb  9 13:17:55.755: INFO: (19) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname2/proxy/: bar (200; 11.010675ms)
Feb  9 13:17:55.755: INFO: (19) /api/v1/namespaces/proxy-9846/services/proxy-service-7k8ln:portname1/proxy/: foo (200; 11.106156ms)
Feb  9 13:17:55.755: INFO: (19) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname2/proxy/: tls qux (200; 11.042226ms)
Feb  9 13:17:55.755: INFO: (19) /api/v1/namespaces/proxy-9846/services/https:proxy-service-7k8ln:tlsportname1/proxy/: tls baz (200; 11.147403ms)
STEP: deleting ReplicationController proxy-service-7k8ln in namespace proxy-9846, will wait for the garbage collector to delete the pods
Feb  9 13:17:55.821: INFO: Deleting ReplicationController proxy-service-7k8ln took: 13.153047ms
Feb  9 13:17:56.122: INFO: Terminating ReplicationController proxy-service-7k8ln pods took: 300.655491ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:18:06.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-9846" for this suite.
Feb  9 13:18:12.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:18:12.812: INFO: namespace proxy-9846 deletion completed in 6.173069482s

• [SLOW TEST:35.772 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
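Every request in the proxy test above follows the same apiserver proxy path scheme: `/api/v1/namespaces/<ns>/{pods|services}/[scheme:]<name>[:port]/proxy/`. A minimal sketch of building one such path, using the namespace, pod name, and port that appear in the log (actually issuing the request would need a live cluster and a working kubeconfig, which are assumptions here):

```shell
# Build an apiserver proxy path for a pod endpoint, as exercised by the test above.
ns="proxy-9846"
pod="proxy-service-7k8ln-4gb4m"
port=160
path="/api/v1/namespaces/${ns}/pods/http:${pod}:${port}/proxy/"
echo "${path}"
# prints /api/v1/namespaces/proxy-9846/pods/http:proxy-service-7k8ln-4gb4m:160/proxy/
# Against a live cluster you could then issue (requires a valid kubectl context):
#   kubectl get --raw "${path}"
```

Prefixing the name with `https:` (and a TLS port such as 443) routes the proxy over TLS instead, which is what the `tls baz` / `tls qux` entries in the log are verifying.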
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:18:12.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  9 13:18:12.979: INFO: PodSpec: initContainers in spec.initContainers
Feb  9 13:19:11.975: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2af674e3-7233-4d6b-930c-64e7d6596fa3", GenerateName:"", Namespace:"init-container-6825", SelfLink:"/api/v1/namespaces/init-container-6825/pods/pod-init-2af674e3-7233-4d6b-930c-64e7d6596fa3", UID:"6e132f69-ee43-47f4-923e-b8946fd33354", ResourceVersion:"23695551", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63716851092, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"979180492"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-68wxm", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0011a6a00), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-68wxm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-68wxm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-68wxm", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001cb0508), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc0022c6900), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001cb0590)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001cb05b0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001cb05b8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001cb05bc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716851093, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716851093, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716851093, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716851093, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc001f9c880), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00260e310)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00260e380)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://49c7a88743623e9277cf7474eff2cd5e62b9ce5f4a06dc97ff981ffd7b931560"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f9c960), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001f9c8c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:19:11.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6825" for this suite.
Feb  9 13:19:34.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:19:34.123: INFO: namespace init-container-6825 deletion completed in 22.13300141s

• [SLOW TEST:81.310 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
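The failure mode logged above (init1 runs `/bin/false` and keeps restarting, so init2 and the app container run1 never start even though RestartPolicy is Always) can be reproduced with a manifest like the following sketch. Container names, images, and commands are taken from the PodSpec dump in the log; the pod name and output file path are assumptions:

```shell
# Write a pod manifest mirroring the logged PodSpec: a RestartAlways pod whose
# first init container always fails, blocking init2 and the app container run1.
cat <<'EOF' > /tmp/pod-init-fail.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]   # always exits non-zero, so init2 never runs
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
grep -c 'image:' /tmp/pod-init-fail.yaml   # prints 3: three containers defined
```

Applied to a cluster, this pod stays in `Init:CrashLoopBackOff` with init1's RestartCount climbing, matching the `RestartCount:3` and `ContainersNotInitialized` status captured in the log.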
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:19:34.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  9 13:19:42.413: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:19:42.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7944" for this suite.
Feb  9 13:19:48.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:19:48.585: INFO: namespace container-runtime-7944 deletion completed in 6.131669582s

• [SLOW TEST:14.463 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:19:48.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-40ed7595-0f04-46e7-93f9-0d43302ae15c
STEP: Creating secret with name secret-projected-all-test-volume-fab5e0fa-de37-4ad2-b40f-62b210a78493
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb  9 13:19:48.737: INFO: Waiting up to 5m0s for pod "projected-volume-a6083e42-31f9-4350-b531-6284b7a11c05" in namespace "projected-5859" to be "success or failure"
Feb  9 13:19:48.757: INFO: Pod "projected-volume-a6083e42-31f9-4350-b531-6284b7a11c05": Phase="Pending", Reason="", readiness=false. Elapsed: 20.175732ms
Feb  9 13:19:50.785: INFO: Pod "projected-volume-a6083e42-31f9-4350-b531-6284b7a11c05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047816314s
Feb  9 13:19:52.792: INFO: Pod "projected-volume-a6083e42-31f9-4350-b531-6284b7a11c05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05533706s
Feb  9 13:19:54.804: INFO: Pod "projected-volume-a6083e42-31f9-4350-b531-6284b7a11c05": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067232231s
Feb  9 13:19:56.811: INFO: Pod "projected-volume-a6083e42-31f9-4350-b531-6284b7a11c05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074277643s
STEP: Saw pod success
Feb  9 13:19:56.811: INFO: Pod "projected-volume-a6083e42-31f9-4350-b531-6284b7a11c05" satisfied condition "success or failure"
Feb  9 13:19:56.815: INFO: Trying to get logs from node iruya-node pod projected-volume-a6083e42-31f9-4350-b531-6284b7a11c05 container projected-all-volume-test: 
STEP: delete the pod
Feb  9 13:19:56.897: INFO: Waiting for pod projected-volume-a6083e42-31f9-4350-b531-6284b7a11c05 to disappear
Feb  9 13:19:56.950: INFO: Pod projected-volume-a6083e42-31f9-4350-b531-6284b7a11c05 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:19:56.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5859" for this suite.
Feb  9 13:20:03.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:20:03.173: INFO: namespace projected-5859 deletion completed in 6.209544913s

• [SLOW TEST:14.587 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:20:03.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  9 13:20:19.428: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  9 13:20:19.452: INFO: Pod pod-with-poststart-http-hook still exists
Feb  9 13:20:21.453: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  9 13:20:21.465: INFO: Pod pod-with-poststart-http-hook still exists
Feb  9 13:20:23.453: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  9 13:20:23.462: INFO: Pod pod-with-poststart-http-hook still exists
Feb  9 13:20:25.453: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  9 13:20:25.464: INFO: Pod pod-with-poststart-http-hook still exists
Feb  9 13:20:27.453: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  9 13:20:27.464: INFO: Pod pod-with-poststart-http-hook still exists
Feb  9 13:20:29.453: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  9 13:20:29.463: INFO: Pod pod-with-poststart-http-hook still exists
Feb  9 13:20:31.453: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  9 13:20:31.464: INFO: Pod pod-with-poststart-http-hook still exists
Feb  9 13:20:33.453: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  9 13:20:33.477: INFO: Pod pod-with-poststart-http-hook still exists
Feb  9 13:20:35.453: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  9 13:20:35.472: INFO: Pod pod-with-poststart-http-hook still exists
Feb  9 13:20:37.458: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb  9 13:20:37.467: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:20:37.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9884" for this suite.
Feb  9 13:21:07.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:21:07.676: INFO: namespace container-lifecycle-hook-9884 deletion completed in 30.200668759s

• [SLOW TEST:64.503 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:21:07.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-b81a94d9-dffb-4332-b814-ae8aaa92a13b
STEP: Creating a pod to test consume configMaps
Feb  9 13:21:07.814: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-706e5747-d9c5-4efd-ad86-69791ae545bb" in namespace "projected-7545" to be "success or failure"
Feb  9 13:21:07.833: INFO: Pod "pod-projected-configmaps-706e5747-d9c5-4efd-ad86-69791ae545bb": Phase="Pending", Reason="", readiness=false. Elapsed: 18.539189ms
Feb  9 13:21:09.863: INFO: Pod "pod-projected-configmaps-706e5747-d9c5-4efd-ad86-69791ae545bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048428699s
Feb  9 13:21:11.874: INFO: Pod "pod-projected-configmaps-706e5747-d9c5-4efd-ad86-69791ae545bb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059752899s
Feb  9 13:21:13.889: INFO: Pod "pod-projected-configmaps-706e5747-d9c5-4efd-ad86-69791ae545bb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.074888653s
Feb  9 13:21:15.897: INFO: Pod "pod-projected-configmaps-706e5747-d9c5-4efd-ad86-69791ae545bb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08265156s
Feb  9 13:21:17.917: INFO: Pod "pod-projected-configmaps-706e5747-d9c5-4efd-ad86-69791ae545bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.102574337s
STEP: Saw pod success
Feb  9 13:21:17.917: INFO: Pod "pod-projected-configmaps-706e5747-d9c5-4efd-ad86-69791ae545bb" satisfied condition "success or failure"
Feb  9 13:21:17.925: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-706e5747-d9c5-4efd-ad86-69791ae545bb container projected-configmap-volume-test: 
STEP: delete the pod
Feb  9 13:21:18.168: INFO: Waiting for pod pod-projected-configmaps-706e5747-d9c5-4efd-ad86-69791ae545bb to disappear
Feb  9 13:21:18.177: INFO: Pod pod-projected-configmaps-706e5747-d9c5-4efd-ad86-69791ae545bb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:21:18.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7545" for this suite.
Feb  9 13:21:24.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:21:24.332: INFO: namespace projected-7545 deletion completed in 6.144145546s

• [SLOW TEST:16.656 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:21:24.333: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  9 13:21:24.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-488'
Feb  9 13:21:26.296: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  9 13:21:26.297: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Feb  9 13:21:26.328: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Feb  9 13:21:26.362: INFO: scanned /root for discovery docs: 
Feb  9 13:21:26.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-488'
Feb  9 13:21:50.619: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  9 13:21:50.619: INFO: stdout: "Created e2e-test-nginx-rc-21fceba18ded5e5f41db1f4f3eed588e\nScaling up e2e-test-nginx-rc-21fceba18ded5e5f41db1f4f3eed588e from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-21fceba18ded5e5f41db1f4f3eed588e up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-21fceba18ded5e5f41db1f4f3eed588e to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Feb  9 13:21:50.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-488'
Feb  9 13:21:50.766: INFO: stderr: ""
Feb  9 13:21:50.766: INFO: stdout: "e2e-test-nginx-rc-21fceba18ded5e5f41db1f4f3eed588e-msqx4 e2e-test-nginx-rc-q7b78 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  9 13:21:55.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-488'
Feb  9 13:21:55.983: INFO: stderr: ""
Feb  9 13:21:55.983: INFO: stdout: "e2e-test-nginx-rc-21fceba18ded5e5f41db1f4f3eed588e-msqx4 e2e-test-nginx-rc-q7b78 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Feb  9 13:22:00.984: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-488'
Feb  9 13:22:01.169: INFO: stderr: ""
Feb  9 13:22:01.169: INFO: stdout: "e2e-test-nginx-rc-21fceba18ded5e5f41db1f4f3eed588e-msqx4 "
Feb  9 13:22:01.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-21fceba18ded5e5f41db1f4f3eed588e-msqx4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-488'
Feb  9 13:22:01.304: INFO: stderr: ""
Feb  9 13:22:01.304: INFO: stdout: "true"
Feb  9 13:22:01.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-21fceba18ded5e5f41db1f4f3eed588e-msqx4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-488'
Feb  9 13:22:01.420: INFO: stderr: ""
Feb  9 13:22:01.420: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Feb  9 13:22:01.420: INFO: e2e-test-nginx-rc-21fceba18ded5e5f41db1f4f3eed588e-msqx4 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Feb  9 13:22:01.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-488'
Feb  9 13:22:01.543: INFO: stderr: ""
Feb  9 13:22:01.543: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:22:01.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-488" for this suite.
Feb  9 13:22:23.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:22:23.739: INFO: namespace kubectl-488 deletion completed in 22.164841413s

• [SLOW TEST:59.407 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:22:23.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  9 13:22:32.397: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:22:32.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6710" for this suite.
Feb  9 13:22:38.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:22:38.784: INFO: namespace container-runtime-6710 deletion completed in 6.274514468s

• [SLOW TEST:15.044 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:22:38.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0209 13:22:50.673619       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  9 13:22:50.673: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:22:50.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8560" for this suite.
Feb  9 13:22:58.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:22:59.079: INFO: namespace gc-8560 deletion completed in 8.322393798s

• [SLOW TEST:20.294 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:22:59.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:23:35.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4038" for this suite.
Feb  9 13:23:41.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:23:41.834: INFO: namespace namespaces-4038 deletion completed in 6.194168014s
STEP: Destroying namespace "nsdeletetest-6471" for this suite.
Feb  9 13:23:41.837: INFO: Namespace nsdeletetest-6471 was already deleted
STEP: Destroying namespace "nsdeletetest-426" for this suite.
Feb  9 13:23:47.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:23:47.985: INFO: namespace nsdeletetest-426 deletion completed in 6.148924953s

• [SLOW TEST:48.906 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:23:47.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Feb  9 13:23:48.201: INFO: Waiting up to 5m0s for pod "var-expansion-5dfc1907-c25e-4bc1-9002-a705cd87c920" in namespace "var-expansion-4928" to be "success or failure"
Feb  9 13:23:48.341: INFO: Pod "var-expansion-5dfc1907-c25e-4bc1-9002-a705cd87c920": Phase="Pending", Reason="", readiness=false. Elapsed: 139.7695ms
Feb  9 13:23:50.378: INFO: Pod "var-expansion-5dfc1907-c25e-4bc1-9002-a705cd87c920": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17641618s
Feb  9 13:23:52.386: INFO: Pod "var-expansion-5dfc1907-c25e-4bc1-9002-a705cd87c920": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184443248s
Feb  9 13:23:54.399: INFO: Pod "var-expansion-5dfc1907-c25e-4bc1-9002-a705cd87c920": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19792982s
Feb  9 13:23:56.406: INFO: Pod "var-expansion-5dfc1907-c25e-4bc1-9002-a705cd87c920": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204554613s
Feb  9 13:23:58.418: INFO: Pod "var-expansion-5dfc1907-c25e-4bc1-9002-a705cd87c920": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.216909608s
STEP: Saw pod success
Feb  9 13:23:58.419: INFO: Pod "var-expansion-5dfc1907-c25e-4bc1-9002-a705cd87c920" satisfied condition "success or failure"
Feb  9 13:23:58.425: INFO: Trying to get logs from node iruya-node pod var-expansion-5dfc1907-c25e-4bc1-9002-a705cd87c920 container dapi-container: 
STEP: delete the pod
Feb  9 13:23:58.531: INFO: Waiting for pod var-expansion-5dfc1907-c25e-4bc1-9002-a705cd87c920 to disappear
Feb  9 13:23:58.539: INFO: Pod var-expansion-5dfc1907-c25e-4bc1-9002-a705cd87c920 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:23:58.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4928" for this suite.
Feb  9 13:24:04.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:24:04.711: INFO: namespace var-expansion-4928 deletion completed in 6.163124698s

• [SLOW TEST:16.724 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:24:04.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-5fc2bb7c-f4b6-46ad-ae90-892bb04ab16a
STEP: Creating configMap with name cm-test-opt-upd-fc59f25e-c4af-4755-83a1-1146dc95a6ca
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-5fc2bb7c-f4b6-46ad-ae90-892bb04ab16a
STEP: Updating configmap cm-test-opt-upd-fc59f25e-c4af-4755-83a1-1146dc95a6ca
STEP: Creating configMap with name cm-test-opt-create-430806b2-d431-4139-bf8a-9220e6e23d1b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:24:19.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9767" for this suite.
Feb  9 13:24:41.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:24:41.285: INFO: namespace configmap-9767 deletion completed in 22.158988681s

• [SLOW TEST:36.573 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
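The ConfigMap test above relies on `optional: true` volume sources: an optional ConfigMap may be deleted after the pod starts (the projected files disappear) or created later (the files appear on a subsequent kubelet sync), without ever failing the pod. A sketch of the kind of pod it creates (names and image are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo       # hypothetical name
spec:
  containers:
  - name: app
    image: busybox:1.29           # assumed image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-del
      mountPath: /etc/cm-del
    - name: cm-create
      mountPath: /etc/cm-create
  volumes:
  - name: cm-del
    configMap:
      name: cm-test-opt-del       # deleted mid-test; the pod stays healthy
      optional: true
  - name: cm-create
    configMap:
      name: cm-test-opt-create    # created after the pod; projected once it exists
      optional: true
```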
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:24:41.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-1750
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1750 to expose endpoints map[]
Feb  9 13:24:41.518: INFO: Get endpoints failed (15.110008ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb  9 13:24:42.535: INFO: successfully validated that service multi-endpoint-test in namespace services-1750 exposes endpoints map[] (1.032592592s elapsed)
STEP: Creating pod pod1 in namespace services-1750
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1750 to expose endpoints map[pod1:[100]]
Feb  9 13:24:46.745: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.190236123s elapsed, will retry)
Feb  9 13:24:51.815: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (9.259950156s elapsed, will retry)
Feb  9 13:24:52.824: INFO: successfully validated that service multi-endpoint-test in namespace services-1750 exposes endpoints map[pod1:[100]] (10.269627411s elapsed)
STEP: Creating pod pod2 in namespace services-1750
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1750 to expose endpoints map[pod1:[100] pod2:[101]]
Feb  9 13:24:57.757: INFO: Unexpected endpoints: found map[ea539007-e47a-496d-848f-4ec62c0eddc4:[100]], expected map[pod1:[100] pod2:[101]] (4.917130646s elapsed, will retry)
Feb  9 13:24:59.820: INFO: successfully validated that service multi-endpoint-test in namespace services-1750 exposes endpoints map[pod1:[100] pod2:[101]] (6.979904942s elapsed)
STEP: Deleting pod pod1 in namespace services-1750
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1750 to expose endpoints map[pod2:[101]]
Feb  9 13:24:59.908: INFO: successfully validated that service multi-endpoint-test in namespace services-1750 exposes endpoints map[pod2:[101]] (75.584646ms elapsed)
STEP: Deleting pod pod2 in namespace services-1750
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1750 to expose endpoints map[]
Feb  9 13:25:00.948: INFO: successfully validated that service multi-endpoint-test in namespace services-1750 exposes endpoints map[] (1.027878447s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:25:01.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1750" for this suite.
Feb  9 13:25:23.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:25:23.243: INFO: namespace services-1750 deletion completed in 22.148340061s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:41.957 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
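The endpoints maps in the log (`map[pod1:[100] pod2:[101]]`) show the service tracking two named target ports, 100 and 101, as pods matching its selector come and go. A service along these lines would produce that behavior (a sketch; the service ports, port names, and selector label are assumptions — only the target ports 100 and 101 are visible in the log):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-demo      # assumed label; pod1/pod2 would carry it
  ports:
  - name: portname1
    port: 80                      # assumed service port
    targetPort: 100               # pod1 listens here -> endpoints map[pod1:[100]]
  - name: portname2
    port: 81                      # assumed service port
    targetPort: 101               # pod2 listens here -> endpoints map[pod2:[101]]
```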
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:25:23.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 13:25:23.370: INFO: Create a RollingUpdate DaemonSet
Feb  9 13:25:23.377: INFO: Check that daemon pods launch on every node of the cluster
Feb  9 13:25:23.393: INFO: Number of nodes with available pods: 0
Feb  9 13:25:23.393: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:25:24.408: INFO: Number of nodes with available pods: 0
Feb  9 13:25:24.408: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:25:25.407: INFO: Number of nodes with available pods: 0
Feb  9 13:25:25.407: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:25:26.412: INFO: Number of nodes with available pods: 0
Feb  9 13:25:26.412: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:25:27.407: INFO: Number of nodes with available pods: 0
Feb  9 13:25:27.407: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:25:28.411: INFO: Number of nodes with available pods: 0
Feb  9 13:25:28.411: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:25:29.737: INFO: Number of nodes with available pods: 0
Feb  9 13:25:29.737: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:25:30.409: INFO: Number of nodes with available pods: 0
Feb  9 13:25:30.409: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:25:31.409: INFO: Number of nodes with available pods: 0
Feb  9 13:25:31.409: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:25:32.426: INFO: Number of nodes with available pods: 1
Feb  9 13:25:32.426: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:25:33.406: INFO: Number of nodes with available pods: 1
Feb  9 13:25:33.406: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:25:34.411: INFO: Number of nodes with available pods: 2
Feb  9 13:25:34.411: INFO: Number of running nodes: 2, number of available pods: 2
Feb  9 13:25:34.411: INFO: Update the DaemonSet to trigger a rollout
Feb  9 13:25:34.426: INFO: Updating DaemonSet daemon-set
Feb  9 13:25:47.487: INFO: Roll back the DaemonSet before rollout is complete
Feb  9 13:25:47.499: INFO: Updating DaemonSet daemon-set
Feb  9 13:25:47.499: INFO: Make sure DaemonSet rollback is complete
Feb  9 13:25:47.529: INFO: Wrong image for pod: daemon-set-bktxz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  9 13:25:47.529: INFO: Pod daemon-set-bktxz is not available
Feb  9 13:25:48.601: INFO: Wrong image for pod: daemon-set-bktxz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  9 13:25:48.601: INFO: Pod daemon-set-bktxz is not available
Feb  9 13:25:49.597: INFO: Wrong image for pod: daemon-set-bktxz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  9 13:25:49.597: INFO: Pod daemon-set-bktxz is not available
Feb  9 13:25:50.603: INFO: Wrong image for pod: daemon-set-bktxz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  9 13:25:50.603: INFO: Pod daemon-set-bktxz is not available
Feb  9 13:25:51.600: INFO: Wrong image for pod: daemon-set-bktxz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Feb  9 13:25:51.600: INFO: Pod daemon-set-bktxz is not available
Feb  9 13:25:52.603: INFO: Pod daemon-set-wrs8f is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2745, will wait for the garbage collector to delete the pods
Feb  9 13:25:52.687: INFO: Deleting DaemonSet.extensions daemon-set took: 13.125441ms
Feb  9 13:25:52.988: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.889078ms
Feb  9 13:25:58.808: INFO: Number of nodes with available pods: 0
Feb  9 13:25:58.808: INFO: Number of running nodes: 0, number of available pods: 0
Feb  9 13:25:58.815: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2745/daemonsets","resourceVersion":"23696687"},"items":null}

Feb  9 13:25:58.820: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2745/pods","resourceVersion":"23696687"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:25:58.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2745" for this suite.
Feb  9 13:26:04.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:26:05.041: INFO: namespace daemonsets-2745 deletion completed in 6.193968645s

• [SLOW TEST:41.798 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
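The rollback test follows the sequence visible in the log: create a `RollingUpdate` DaemonSet, update it to a bad image (`foo:non-existent`), then roll back before the rollout completes and confirm that pods never touched by the bad revision are not restarted. A sketch of the DaemonSet involved (selector labels are assumptions; the image is the one the log expects):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set-demo        # assumed label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: daemon-set-demo
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

The rollback itself can be triggered with `kubectl rollout undo daemonset/daemon-set` (the e2e framework does the equivalent through the API); only the pod stuck on `foo:non-existent` is replaced.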
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:26:05.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 13:26:05.164: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08ce81ee-0fba-4d8d-acae-6cffa9458d1f" in namespace "projected-7880" to be "success or failure"
Feb  9 13:26:05.170: INFO: Pod "downwardapi-volume-08ce81ee-0fba-4d8d-acae-6cffa9458d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.318169ms
Feb  9 13:26:07.179: INFO: Pod "downwardapi-volume-08ce81ee-0fba-4d8d-acae-6cffa9458d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015404618s
Feb  9 13:26:09.191: INFO: Pod "downwardapi-volume-08ce81ee-0fba-4d8d-acae-6cffa9458d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027363969s
Feb  9 13:26:11.201: INFO: Pod "downwardapi-volume-08ce81ee-0fba-4d8d-acae-6cffa9458d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037644245s
Feb  9 13:26:13.212: INFO: Pod "downwardapi-volume-08ce81ee-0fba-4d8d-acae-6cffa9458d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047818801s
Feb  9 13:26:15.238: INFO: Pod "downwardapi-volume-08ce81ee-0fba-4d8d-acae-6cffa9458d1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073801325s
STEP: Saw pod success
Feb  9 13:26:15.238: INFO: Pod "downwardapi-volume-08ce81ee-0fba-4d8d-acae-6cffa9458d1f" satisfied condition "success or failure"
Feb  9 13:26:15.278: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-08ce81ee-0fba-4d8d-acae-6cffa9458d1f container client-container: 
STEP: delete the pod
Feb  9 13:26:15.354: INFO: Waiting for pod downwardapi-volume-08ce81ee-0fba-4d8d-acae-6cffa9458d1f to disappear
Feb  9 13:26:15.361: INFO: Pod downwardapi-volume-08ce81ee-0fba-4d8d-acae-6cffa9458d1f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:26:15.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7880" for this suite.
Feb  9 13:26:21.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:26:21.610: INFO: namespace projected-7880 deletion completed in 6.241556234s

• [SLOW TEST:16.568 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
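This test (and the memory-limit variant later in the run) projects the container's resource limits into a file via a projected downward API volume, then reads the file back. A sketch of such a pod — the name, image, and limit values are assumptions; the `resourceFieldRef` fields are standard API:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29           # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit /etc/podinfo/mem_limit"]
    resources:
      limits:
        cpu: 500m                 # assumed values
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m         # file contains "500"
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi        # file contains "64"
```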
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:26:21.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Feb  9 13:26:21.837: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6497" to be "success or failure"
Feb  9 13:26:21.885: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 47.450216ms
Feb  9 13:26:23.898: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060466432s
Feb  9 13:26:25.905: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067319214s
Feb  9 13:26:28.043: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205514808s
Feb  9 13:26:30.052: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.215064921s
Feb  9 13:26:32.068: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.230468592s
Feb  9 13:26:34.087: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.24951501s
STEP: Saw pod success
Feb  9 13:26:34.087: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Feb  9 13:26:34.091: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Feb  9 13:26:34.245: INFO: Waiting for pod pod-host-path-test to disappear
Feb  9 13:26:34.253: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:26:34.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-6497" for this suite.
Feb  9 13:26:40.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:26:40.463: INFO: namespace hostpath-6497 deletion completed in 6.203559627s

• [SLOW TEST:18.852 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
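The hostPath test mounts a directory from the node and checks the mode bits the kubelet gives the mount point. A sketch of the pod shape (the host path, image, and check command are assumptions; the pod and container names match the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-test
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox:1.29               # assumed image
    command: ["sh", "-c", "ls -ld /test-volume"]  # assumed mode check
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo        # assumed node path
      type: DirectoryOrCreate         # created on the node if absent
```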
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:26:40.463: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 13:26:40.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-17b24925-6cc4-471a-adfa-800934ac8356" in namespace "projected-2162" to be "success or failure"
Feb  9 13:26:40.685: INFO: Pod "downwardapi-volume-17b24925-6cc4-471a-adfa-800934ac8356": Phase="Pending", Reason="", readiness=false. Elapsed: 8.339872ms
Feb  9 13:26:42.695: INFO: Pod "downwardapi-volume-17b24925-6cc4-471a-adfa-800934ac8356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018808685s
Feb  9 13:26:44.704: INFO: Pod "downwardapi-volume-17b24925-6cc4-471a-adfa-800934ac8356": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027904148s
Feb  9 13:26:46.715: INFO: Pod "downwardapi-volume-17b24925-6cc4-471a-adfa-800934ac8356": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03902472s
Feb  9 13:26:48.725: INFO: Pod "downwardapi-volume-17b24925-6cc4-471a-adfa-800934ac8356": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048673964s
Feb  9 13:26:50.741: INFO: Pod "downwardapi-volume-17b24925-6cc4-471a-adfa-800934ac8356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065057449s
STEP: Saw pod success
Feb  9 13:26:50.742: INFO: Pod "downwardapi-volume-17b24925-6cc4-471a-adfa-800934ac8356" satisfied condition "success or failure"
Feb  9 13:26:50.745: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-17b24925-6cc4-471a-adfa-800934ac8356 container client-container: 
STEP: delete the pod
Feb  9 13:26:50.825: INFO: Waiting for pod downwardapi-volume-17b24925-6cc4-471a-adfa-800934ac8356 to disappear
Feb  9 13:26:50.832: INFO: Pod downwardapi-volume-17b24925-6cc4-471a-adfa-800934ac8356 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:26:50.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2162" for this suite.
Feb  9 13:26:56.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:26:56.981: INFO: namespace projected-2162 deletion completed in 6.143388201s

• [SLOW TEST:16.518 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:26:56.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 13:26:57.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:27:05.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3404" for this suite.
Feb  9 13:28:07.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:28:07.432: INFO: namespace pods-3404 deletion completed in 1m2.206988343s

• [SLOW TEST:70.450 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:28:07.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 13:28:07.523: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb  9 13:28:09.840: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:28:11.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4404" for this suite.
Feb  9 13:28:21.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:28:22.780: INFO: namespace replication-controller-4404 deletion completed in 11.198488108s

• [SLOW TEST:15.348 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
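The quota test above creates a ResourceQuota capping the namespace at two pods, then an RC asking for more; the RC controller surfaces a `ReplicaFailure` condition in `status.conditions`, which clears once the RC is scaled within quota. A sketch (replica count and image are assumptions; the object names match the log):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                     # assumed; anything above the quota triggers the condition
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: app
        image: nginx:1.14-alpine  # assumed image
```

Scaling `replicas` down to 2 (as the test does at 13:28:09) lets the controller satisfy the quota and remove the failure condition.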
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:28:22.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  9 13:28:23.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7452'
Feb  9 13:28:23.714: INFO: stderr: ""
Feb  9 13:28:23.714: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Feb  9 13:28:33.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7452 -o json'
Feb  9 13:28:33.901: INFO: stderr: ""
Feb  9 13:28:33.901: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-09T13:28:23Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-7452\",\n        \"resourceVersion\": \"23697077\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-7452/pods/e2e-test-nginx-pod\",\n        \"uid\": \"b91f4073-a19b-4041-8848-ebb217e0dfbf\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-g5zm2\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-g5zm2\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-g5zm2\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-09T13:28:23Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-09T13:28:31Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-09T13:28:31Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-09T13:28:23Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://1d01a55ead1f402f5777855d603ceae40e9ce56f650b2e5d8dcbd4d7558ee31f\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-09T13:28:30Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-09T13:28:23Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb  9 13:28:33.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7452'
Feb  9 13:28:34.294: INFO: stderr: ""
Feb  9 13:28:34.294: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Feb  9 13:28:34.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7452'
Feb  9 13:28:40.760: INFO: stderr: ""
Feb  9 13:28:40.760: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:28:40.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7452" for this suite.
Feb  9 13:28:46.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:28:46.988: INFO: namespace kubectl-7452 deletion completed in 6.213056391s

• [SLOW TEST:24.207 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:28:46.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-542aed6a-593c-453f-98d0-5476e9822d28
STEP: Creating a pod to test consume configMaps
Feb  9 13:28:47.096: INFO: Waiting up to 5m0s for pod "pod-configmaps-20602e1f-fdd6-4b0c-a0cb-f8c5bd49ac0f" in namespace "configmap-7688" to be "success or failure"
Feb  9 13:28:47.164: INFO: Pod "pod-configmaps-20602e1f-fdd6-4b0c-a0cb-f8c5bd49ac0f": Phase="Pending", Reason="", readiness=false. Elapsed: 67.418697ms
Feb  9 13:28:49.171: INFO: Pod "pod-configmaps-20602e1f-fdd6-4b0c-a0cb-f8c5bd49ac0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07494106s
Feb  9 13:28:51.178: INFO: Pod "pod-configmaps-20602e1f-fdd6-4b0c-a0cb-f8c5bd49ac0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081576045s
Feb  9 13:28:53.190: INFO: Pod "pod-configmaps-20602e1f-fdd6-4b0c-a0cb-f8c5bd49ac0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.093419244s
Feb  9 13:28:55.205: INFO: Pod "pod-configmaps-20602e1f-fdd6-4b0c-a0cb-f8c5bd49ac0f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108555118s
Feb  9 13:28:57.241: INFO: Pod "pod-configmaps-20602e1f-fdd6-4b0c-a0cb-f8c5bd49ac0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.145030349s
STEP: Saw pod success
Feb  9 13:28:57.241: INFO: Pod "pod-configmaps-20602e1f-fdd6-4b0c-a0cb-f8c5bd49ac0f" satisfied condition "success or failure"
Feb  9 13:28:57.247: INFO: Trying to get logs from node iruya-node pod pod-configmaps-20602e1f-fdd6-4b0c-a0cb-f8c5bd49ac0f container configmap-volume-test: 
STEP: delete the pod
Feb  9 13:28:57.326: INFO: Waiting for pod pod-configmaps-20602e1f-fdd6-4b0c-a0cb-f8c5bd49ac0f to disappear
Feb  9 13:28:57.335: INFO: Pod pod-configmaps-20602e1f-fdd6-4b0c-a0cb-f8c5bd49ac0f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:28:57.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7688" for this suite.
Feb  9 13:29:03.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:29:03.509: INFO: namespace configmap-7688 deletion completed in 6.127405857s

• [SLOW TEST:16.520 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:29:03.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Feb  9 13:29:03.603: INFO: Waiting up to 5m0s for pod "client-containers-1f2f89d6-d149-4c4a-8b75-f0a15a1ebdc4" in namespace "containers-2648" to be "success or failure"
Feb  9 13:29:03.673: INFO: Pod "client-containers-1f2f89d6-d149-4c4a-8b75-f0a15a1ebdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 69.046363ms
Feb  9 13:29:05.682: INFO: Pod "client-containers-1f2f89d6-d149-4c4a-8b75-f0a15a1ebdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078433872s
Feb  9 13:29:07.689: INFO: Pod "client-containers-1f2f89d6-d149-4c4a-8b75-f0a15a1ebdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085301799s
Feb  9 13:29:09.697: INFO: Pod "client-containers-1f2f89d6-d149-4c4a-8b75-f0a15a1ebdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092886117s
Feb  9 13:29:11.715: INFO: Pod "client-containers-1f2f89d6-d149-4c4a-8b75-f0a15a1ebdc4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.111195468s
Feb  9 13:29:13.723: INFO: Pod "client-containers-1f2f89d6-d149-4c4a-8b75-f0a15a1ebdc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.119298633s
STEP: Saw pod success
Feb  9 13:29:13.723: INFO: Pod "client-containers-1f2f89d6-d149-4c4a-8b75-f0a15a1ebdc4" satisfied condition "success or failure"
Feb  9 13:29:13.727: INFO: Trying to get logs from node iruya-node pod client-containers-1f2f89d6-d149-4c4a-8b75-f0a15a1ebdc4 container test-container: 
STEP: delete the pod
Feb  9 13:29:13.911: INFO: Waiting for pod client-containers-1f2f89d6-d149-4c4a-8b75-f0a15a1ebdc4 to disappear
Feb  9 13:29:13.925: INFO: Pod client-containers-1f2f89d6-d149-4c4a-8b75-f0a15a1ebdc4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:29:13.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2648" for this suite.
Feb  9 13:29:19.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:29:20.069: INFO: namespace containers-2648 deletion completed in 6.136227741s

• [SLOW TEST:16.560 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:29:20.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-19461d54-4f94-4d6c-bab0-0f799a3499d9
STEP: Creating a pod to test consume secrets
Feb  9 13:29:20.199: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-38ffd43c-1376-4aba-a54a-64a4b5f1626a" in namespace "projected-3921" to be "success or failure"
Feb  9 13:29:20.224: INFO: Pod "pod-projected-secrets-38ffd43c-1376-4aba-a54a-64a4b5f1626a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.3748ms
Feb  9 13:29:22.234: INFO: Pod "pod-projected-secrets-38ffd43c-1376-4aba-a54a-64a4b5f1626a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03507322s
Feb  9 13:29:24.248: INFO: Pod "pod-projected-secrets-38ffd43c-1376-4aba-a54a-64a4b5f1626a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048923353s
Feb  9 13:29:26.254: INFO: Pod "pod-projected-secrets-38ffd43c-1376-4aba-a54a-64a4b5f1626a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054925242s
Feb  9 13:29:28.263: INFO: Pod "pod-projected-secrets-38ffd43c-1376-4aba-a54a-64a4b5f1626a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064066647s
STEP: Saw pod success
Feb  9 13:29:28.263: INFO: Pod "pod-projected-secrets-38ffd43c-1376-4aba-a54a-64a4b5f1626a" satisfied condition "success or failure"
Feb  9 13:29:28.270: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-38ffd43c-1376-4aba-a54a-64a4b5f1626a container projected-secret-volume-test: 
STEP: delete the pod
Feb  9 13:29:28.355: INFO: Waiting for pod pod-projected-secrets-38ffd43c-1376-4aba-a54a-64a4b5f1626a to disappear
Feb  9 13:29:28.380: INFO: Pod pod-projected-secrets-38ffd43c-1376-4aba-a54a-64a4b5f1626a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:29:28.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3921" for this suite.
Feb  9 13:29:34.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:29:34.516: INFO: namespace projected-3921 deletion completed in 6.129564582s

• [SLOW TEST:14.447 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:29:34.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Feb  9 13:29:34.671: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3337,SelfLink:/api/v1/namespaces/watch-3337/configmaps/e2e-watch-test-resource-version,UID:8eca7d5a-6d4c-4fc1-9b1f-596b40e5ee1e,ResourceVersion:23697263,Generation:0,CreationTimestamp:2020-02-09 13:29:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  9 13:29:34.671: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3337,SelfLink:/api/v1/namespaces/watch-3337/configmaps/e2e-watch-test-resource-version,UID:8eca7d5a-6d4c-4fc1-9b1f-596b40e5ee1e,ResourceVersion:23697264,Generation:0,CreationTimestamp:2020-02-09 13:29:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:29:34.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3337" for this suite.
Feb  9 13:29:40.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:29:40.822: INFO: namespace watch-3337 deletion completed in 6.145686908s

• [SLOW TEST:6.306 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:29:40.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  9 13:29:40.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3573'
Feb  9 13:29:41.166: INFO: stderr: ""
Feb  9 13:29:41.166: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  9 13:29:41.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3573'
Feb  9 13:29:41.318: INFO: stderr: ""
Feb  9 13:29:41.318: INFO: stdout: "update-demo-nautilus-795gj update-demo-nautilus-dfc6w "
Feb  9 13:29:41.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:29:41.436: INFO: stderr: ""
Feb  9 13:29:41.436: INFO: stdout: ""
Feb  9 13:29:41.436: INFO: update-demo-nautilus-795gj is created but not running
Feb  9 13:29:46.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3573'
Feb  9 13:29:46.550: INFO: stderr: ""
Feb  9 13:29:46.550: INFO: stdout: "update-demo-nautilus-795gj update-demo-nautilus-dfc6w "
Feb  9 13:29:46.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:29:46.642: INFO: stderr: ""
Feb  9 13:29:46.642: INFO: stdout: ""
Feb  9 13:29:46.642: INFO: update-demo-nautilus-795gj is created but not running
Feb  9 13:29:51.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3573'
Feb  9 13:29:51.731: INFO: stderr: ""
Feb  9 13:29:51.732: INFO: stdout: "update-demo-nautilus-795gj update-demo-nautilus-dfc6w "
Feb  9 13:29:51.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:29:51.865: INFO: stderr: ""
Feb  9 13:29:51.865: INFO: stdout: ""
Feb  9 13:29:51.865: INFO: update-demo-nautilus-795gj is created but not running
Feb  9 13:29:56.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3573'
Feb  9 13:29:57.004: INFO: stderr: ""
Feb  9 13:29:57.005: INFO: stdout: "update-demo-nautilus-795gj update-demo-nautilus-dfc6w "
Feb  9 13:29:57.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:29:57.096: INFO: stderr: ""
Feb  9 13:29:57.096: INFO: stdout: "true"
Feb  9 13:29:57.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:29:57.191: INFO: stderr: ""
Feb  9 13:29:57.191: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  9 13:29:57.191: INFO: validating pod update-demo-nautilus-795gj
Feb  9 13:29:57.228: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  9 13:29:57.228: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  9 13:29:57.228: INFO: update-demo-nautilus-795gj is verified up and running
Feb  9 13:29:57.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dfc6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:29:57.316: INFO: stderr: ""
Feb  9 13:29:57.316: INFO: stdout: "true"
Feb  9 13:29:57.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dfc6w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:29:57.432: INFO: stderr: ""
Feb  9 13:29:57.433: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  9 13:29:57.433: INFO: validating pod update-demo-nautilus-dfc6w
Feb  9 13:29:57.442: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  9 13:29:57.442: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  9 13:29:57.442: INFO: update-demo-nautilus-dfc6w is verified up and running
STEP: scaling down the replication controller
Feb  9 13:29:57.445: INFO: scanned /root for discovery docs: 
Feb  9 13:29:57.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3573'
Feb  9 13:29:58.581: INFO: stderr: ""
Feb  9 13:29:58.581: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  9 13:29:58.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3573'
Feb  9 13:29:58.712: INFO: stderr: ""
Feb  9 13:29:58.712: INFO: stdout: "update-demo-nautilus-795gj update-demo-nautilus-dfc6w "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb  9 13:30:03.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3573'
Feb  9 13:30:03.871: INFO: stderr: ""
Feb  9 13:30:03.871: INFO: stdout: "update-demo-nautilus-795gj "
Feb  9 13:30:03.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:30:03.994: INFO: stderr: ""
Feb  9 13:30:03.995: INFO: stdout: "true"
Feb  9 13:30:03.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:30:04.089: INFO: stderr: ""
Feb  9 13:30:04.089: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  9 13:30:04.089: INFO: validating pod update-demo-nautilus-795gj
Feb  9 13:30:04.105: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  9 13:30:04.105: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  9 13:30:04.105: INFO: update-demo-nautilus-795gj is verified up and running
STEP: scaling up the replication controller
Feb  9 13:30:04.111: INFO: scanned /root for discovery docs: 
Feb  9 13:30:04.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3573'
Feb  9 13:30:05.320: INFO: stderr: ""
Feb  9 13:30:05.320: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  9 13:30:05.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3573'
Feb  9 13:30:05.576: INFO: stderr: ""
Feb  9 13:30:05.576: INFO: stdout: "update-demo-nautilus-795gj update-demo-nautilus-7wjtt "
Feb  9 13:30:05.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:30:05.660: INFO: stderr: ""
Feb  9 13:30:05.660: INFO: stdout: "true"
Feb  9 13:30:05.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:30:05.975: INFO: stderr: ""
Feb  9 13:30:05.975: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  9 13:30:05.975: INFO: validating pod update-demo-nautilus-795gj
Feb  9 13:30:05.986: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  9 13:30:05.987: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  9 13:30:05.987: INFO: update-demo-nautilus-795gj is verified up and running
Feb  9 13:30:05.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wjtt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:30:06.074: INFO: stderr: ""
Feb  9 13:30:06.074: INFO: stdout: ""
Feb  9 13:30:06.074: INFO: update-demo-nautilus-7wjtt is created but not running
Feb  9 13:30:11.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3573'
Feb  9 13:30:11.235: INFO: stderr: ""
Feb  9 13:30:11.235: INFO: stdout: "update-demo-nautilus-795gj update-demo-nautilus-7wjtt "
Feb  9 13:30:11.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:30:11.409: INFO: stderr: ""
Feb  9 13:30:11.410: INFO: stdout: "true"
Feb  9 13:30:11.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:30:11.497: INFO: stderr: ""
Feb  9 13:30:11.497: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  9 13:30:11.497: INFO: validating pod update-demo-nautilus-795gj
Feb  9 13:30:11.504: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  9 13:30:11.504: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  9 13:30:11.504: INFO: update-demo-nautilus-795gj is verified up and running
Feb  9 13:30:11.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wjtt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:30:11.605: INFO: stderr: ""
Feb  9 13:30:11.605: INFO: stdout: ""
Feb  9 13:30:11.605: INFO: update-demo-nautilus-7wjtt is created but not running
Feb  9 13:30:16.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3573'
Feb  9 13:30:16.743: INFO: stderr: ""
Feb  9 13:30:16.743: INFO: stdout: "update-demo-nautilus-795gj update-demo-nautilus-7wjtt "
Feb  9 13:30:16.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:30:16.889: INFO: stderr: ""
Feb  9 13:30:16.890: INFO: stdout: "true"
Feb  9 13:30:16.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-795gj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:30:16.988: INFO: stderr: ""
Feb  9 13:30:16.988: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  9 13:30:16.989: INFO: validating pod update-demo-nautilus-795gj
Feb  9 13:30:16.997: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  9 13:30:16.997: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  9 13:30:16.997: INFO: update-demo-nautilus-795gj is verified up and running
Feb  9 13:30:16.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wjtt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:30:17.083: INFO: stderr: ""
Feb  9 13:30:17.083: INFO: stdout: "true"
Feb  9 13:30:17.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7wjtt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3573'
Feb  9 13:30:17.160: INFO: stderr: ""
Feb  9 13:30:17.160: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  9 13:30:17.160: INFO: validating pod update-demo-nautilus-7wjtt
Feb  9 13:30:17.166: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  9 13:30:17.166: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  9 13:30:17.166: INFO: update-demo-nautilus-7wjtt is verified up and running
STEP: using delete to clean up resources
Feb  9 13:30:17.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3573'
Feb  9 13:30:17.258: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  9 13:30:17.258: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  9 13:30:17.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3573'
Feb  9 13:30:17.362: INFO: stderr: "No resources found.\n"
Feb  9 13:30:17.363: INFO: stdout: ""
Feb  9 13:30:17.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3573 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  9 13:30:17.511: INFO: stderr: ""
Feb  9 13:30:17.512: INFO: stdout: "update-demo-nautilus-795gj\nupdate-demo-nautilus-7wjtt\n"
Feb  9 13:30:18.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3573'
Feb  9 13:30:19.023: INFO: stderr: "No resources found.\n"
Feb  9 13:30:19.023: INFO: stdout: ""
Feb  9 13:30:19.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3573 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  9 13:30:19.195: INFO: stderr: ""
Feb  9 13:30:19.195: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:30:19.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3573" for this suite.
Feb  9 13:30:42.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:30:42.332: INFO: namespace kubectl-3573 deletion completed in 23.125090291s

• [SLOW TEST:61.509 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
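The checks above repeatedly run `kubectl get pods ... -o template` with a go-template that prints "true" once the `update-demo` container reports a running state, retrying on an interval until a deadline. A minimal sketch of that retry pattern (the function name and timing are illustrative, not the framework's actual code):

```shell
# Hypothetical sketch of the poll-until-ready loop the e2e framework
# performs around its go-template checks: run a command, accept "true",
# otherwise sleep and retry until the deadline passes.
poll_until_true() {
  # $1 = timeout in seconds; remaining args = command to poll
  local deadline=$(( $(date +%s) + $1 )); shift
  while [ "$(date +%s)" -lt "$deadline" ]; do
    out="$("$@")"
    if [ "$out" = "true" ]; then
      echo ready
      return 0
    fi
    sleep 1   # the log shows the suite re-checking every few seconds
  done
  echo timeout
  return 1
}
```

In the log, the polled command is the `kubectl get pods ... --template={{if (exists . "status" "containerStatuses")}}...` invocation, whose stdout is "true" only when the named container is running.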
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:30:42.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-a4a468b0-b2a3-443e-9bf1-d1de479dd536
STEP: Creating a pod to test consume configMaps
Feb  9 13:30:42.449: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-de148f63-3e64-4e69-9af0-469f1fda5af8" in namespace "projected-3245" to be "success or failure"
Feb  9 13:30:42.461: INFO: Pod "pod-projected-configmaps-de148f63-3e64-4e69-9af0-469f1fda5af8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.014565ms
Feb  9 13:30:44.475: INFO: Pod "pod-projected-configmaps-de148f63-3e64-4e69-9af0-469f1fda5af8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026109751s
Feb  9 13:30:46.491: INFO: Pod "pod-projected-configmaps-de148f63-3e64-4e69-9af0-469f1fda5af8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042095983s
Feb  9 13:30:48.542: INFO: Pod "pod-projected-configmaps-de148f63-3e64-4e69-9af0-469f1fda5af8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092326631s
Feb  9 13:30:50.575: INFO: Pod "pod-projected-configmaps-de148f63-3e64-4e69-9af0-469f1fda5af8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125936428s
STEP: Saw pod success
Feb  9 13:30:50.576: INFO: Pod "pod-projected-configmaps-de148f63-3e64-4e69-9af0-469f1fda5af8" satisfied condition "success or failure"
Feb  9 13:30:50.582: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-de148f63-3e64-4e69-9af0-469f1fda5af8 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  9 13:30:50.702: INFO: Waiting for pod pod-projected-configmaps-de148f63-3e64-4e69-9af0-469f1fda5af8 to disappear
Feb  9 13:30:50.706: INFO: Pod pod-projected-configmaps-de148f63-3e64-4e69-9af0-469f1fda5af8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:30:50.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3245" for this suite.
Feb  9 13:30:56.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:30:56.879: INFO: namespace projected-3245 deletion completed in 6.168517801s

• [SLOW TEST:14.546 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:30:56.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-76c923e6-6923-4a30-9c51-ea3ecc096da9
STEP: Creating a pod to test consume secrets
Feb  9 13:30:57.348: INFO: Waiting up to 5m0s for pod "pod-secrets-a233a0cb-7f09-4fbf-8880-ef63d4cbb835" in namespace "secrets-5721" to be "success or failure"
Feb  9 13:30:57.370: INFO: Pod "pod-secrets-a233a0cb-7f09-4fbf-8880-ef63d4cbb835": Phase="Pending", Reason="", readiness=false. Elapsed: 21.994517ms
Feb  9 13:30:59.379: INFO: Pod "pod-secrets-a233a0cb-7f09-4fbf-8880-ef63d4cbb835": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031306245s
Feb  9 13:31:01.397: INFO: Pod "pod-secrets-a233a0cb-7f09-4fbf-8880-ef63d4cbb835": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048450279s
Feb  9 13:31:03.408: INFO: Pod "pod-secrets-a233a0cb-7f09-4fbf-8880-ef63d4cbb835": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060339156s
Feb  9 13:31:05.419: INFO: Pod "pod-secrets-a233a0cb-7f09-4fbf-8880-ef63d4cbb835": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070734205s
Feb  9 13:31:07.438: INFO: Pod "pod-secrets-a233a0cb-7f09-4fbf-8880-ef63d4cbb835": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.090136158s
STEP: Saw pod success
Feb  9 13:31:07.439: INFO: Pod "pod-secrets-a233a0cb-7f09-4fbf-8880-ef63d4cbb835" satisfied condition "success or failure"
Feb  9 13:31:07.447: INFO: Trying to get logs from node iruya-node pod pod-secrets-a233a0cb-7f09-4fbf-8880-ef63d4cbb835 container secret-volume-test: 
STEP: delete the pod
Feb  9 13:31:07.698: INFO: Waiting for pod pod-secrets-a233a0cb-7f09-4fbf-8880-ef63d4cbb835 to disappear
Feb  9 13:31:07.726: INFO: Pod pod-secrets-a233a0cb-7f09-4fbf-8880-ef63d4cbb835 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:31:07.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5721" for this suite.
Feb  9 13:31:15.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:31:15.948: INFO: namespace secrets-5721 deletion completed in 8.215795398s
STEP: Destroying namespace "secret-namespace-7458" for this suite.
Feb  9 13:31:21.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:31:22.121: INFO: namespace secret-namespace-7458 deletion completed in 6.173560644s

• [SLOW TEST:25.242 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:31:22.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-b1e22b04-8743-4945-a10d-3ca754c92d95
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:31:32.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6695" for this suite.
Feb  9 13:31:54.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:31:54.599: INFO: namespace configmap-6695 deletion completed in 22.175444117s

• [SLOW TEST:32.476 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:31:54.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  9 13:31:54.675: INFO: Waiting up to 5m0s for pod "pod-5ed43763-3bbb-4cc8-bbee-aed9cb4faa2a" in namespace "emptydir-7450" to be "success or failure"
Feb  9 13:31:54.679: INFO: Pod "pod-5ed43763-3bbb-4cc8-bbee-aed9cb4faa2a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.913898ms
Feb  9 13:31:56.688: INFO: Pod "pod-5ed43763-3bbb-4cc8-bbee-aed9cb4faa2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012397762s
Feb  9 13:31:58.696: INFO: Pod "pod-5ed43763-3bbb-4cc8-bbee-aed9cb4faa2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021263708s
Feb  9 13:32:00.705: INFO: Pod "pod-5ed43763-3bbb-4cc8-bbee-aed9cb4faa2a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029686706s
Feb  9 13:32:02.724: INFO: Pod "pod-5ed43763-3bbb-4cc8-bbee-aed9cb4faa2a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049301081s
Feb  9 13:32:04.735: INFO: Pod "pod-5ed43763-3bbb-4cc8-bbee-aed9cb4faa2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060103997s
STEP: Saw pod success
Feb  9 13:32:04.735: INFO: Pod "pod-5ed43763-3bbb-4cc8-bbee-aed9cb4faa2a" satisfied condition "success or failure"
Feb  9 13:32:04.740: INFO: Trying to get logs from node iruya-node pod pod-5ed43763-3bbb-4cc8-bbee-aed9cb4faa2a container test-container: 
STEP: delete the pod
Feb  9 13:32:04.810: INFO: Waiting for pod pod-5ed43763-3bbb-4cc8-bbee-aed9cb4faa2a to disappear
Feb  9 13:32:04.815: INFO: Pod pod-5ed43763-3bbb-4cc8-bbee-aed9cb4faa2a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:32:04.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7450" for this suite.
Feb  9 13:32:10.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:32:10.986: INFO: namespace emptydir-7450 deletion completed in 6.162621489s

• [SLOW TEST:16.387 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:32:10.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Feb  9 13:32:11.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9884'
Feb  9 13:32:12.917: INFO: stderr: ""
Feb  9 13:32:12.917: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  9 13:32:12.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9884'
Feb  9 13:32:13.058: INFO: stderr: ""
Feb  9 13:32:13.059: INFO: stdout: "update-demo-nautilus-9kw4s update-demo-nautilus-zk59j "
Feb  9 13:32:13.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kw4s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9884'
Feb  9 13:32:13.165: INFO: stderr: ""
Feb  9 13:32:13.165: INFO: stdout: ""
Feb  9 13:32:13.165: INFO: update-demo-nautilus-9kw4s is created but not running
Feb  9 13:32:18.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9884'
Feb  9 13:32:19.309: INFO: stderr: ""
Feb  9 13:32:19.309: INFO: stdout: "update-demo-nautilus-9kw4s update-demo-nautilus-zk59j "
Feb  9 13:32:19.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kw4s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9884'
Feb  9 13:32:19.755: INFO: stderr: ""
Feb  9 13:32:19.756: INFO: stdout: ""
Feb  9 13:32:19.756: INFO: update-demo-nautilus-9kw4s is created but not running
Feb  9 13:32:24.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9884'
Feb  9 13:32:24.879: INFO: stderr: ""
Feb  9 13:32:24.879: INFO: stdout: "update-demo-nautilus-9kw4s update-demo-nautilus-zk59j "
Feb  9 13:32:24.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kw4s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9884'
Feb  9 13:32:24.967: INFO: stderr: ""
Feb  9 13:32:24.967: INFO: stdout: "true"
Feb  9 13:32:24.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9kw4s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9884'
Feb  9 13:32:25.044: INFO: stderr: ""
Feb  9 13:32:25.044: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  9 13:32:25.044: INFO: validating pod update-demo-nautilus-9kw4s
Feb  9 13:32:25.056: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  9 13:32:25.056: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  9 13:32:25.056: INFO: update-demo-nautilus-9kw4s is verified up and running
Feb  9 13:32:25.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zk59j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9884'
Feb  9 13:32:25.141: INFO: stderr: ""
Feb  9 13:32:25.141: INFO: stdout: "true"
Feb  9 13:32:25.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zk59j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9884'
Feb  9 13:32:25.232: INFO: stderr: ""
Feb  9 13:32:25.232: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  9 13:32:25.233: INFO: validating pod update-demo-nautilus-zk59j
Feb  9 13:32:25.244: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  9 13:32:25.244: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  9 13:32:25.244: INFO: update-demo-nautilus-zk59j is verified up and running
STEP: rolling-update to new replication controller
Feb  9 13:32:25.246: INFO: scanned /root for discovery docs: 
Feb  9 13:32:25.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9884'
Feb  9 13:32:56.444: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb  9 13:32:56.445: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  9 13:32:56.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9884'
Feb  9 13:32:56.556: INFO: stderr: ""
Feb  9 13:32:56.556: INFO: stdout: "update-demo-kitten-tmxv4 update-demo-kitten-w9ctd "
Feb  9 13:32:56.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tmxv4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9884'
Feb  9 13:32:56.640: INFO: stderr: ""
Feb  9 13:32:56.640: INFO: stdout: "true"
Feb  9 13:32:56.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tmxv4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9884'
Feb  9 13:32:56.719: INFO: stderr: ""
Feb  9 13:32:56.719: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  9 13:32:56.719: INFO: validating pod update-demo-kitten-tmxv4
Feb  9 13:32:56.744: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  9 13:32:56.745: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  9 13:32:56.745: INFO: update-demo-kitten-tmxv4 is verified up and running
Feb  9 13:32:56.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-w9ctd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9884'
Feb  9 13:32:56.821: INFO: stderr: ""
Feb  9 13:32:56.822: INFO: stdout: "true"
Feb  9 13:32:56.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-w9ctd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9884'
Feb  9 13:32:56.894: INFO: stderr: ""
Feb  9 13:32:56.894: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb  9 13:32:56.894: INFO: validating pod update-demo-kitten-w9ctd
Feb  9 13:32:56.900: INFO: got data: {
  "image": "kitten.jpg"
}

Feb  9 13:32:56.900: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Feb  9 13:32:56.900: INFO: update-demo-kitten-w9ctd is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:32:56.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9884" for this suite.
Feb  9 13:33:22.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:33:23.044: INFO: namespace kubectl-9884 deletion completed in 26.13922415s

• [SLOW TEST:72.057 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
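The stderr captured in this test ("Command \"rolling-update\" is deprecated, use \"rollout\" instead") points at the replacement workflow. A hedged sketch of the Deployment-based equivalent of the nautilus-to-kitten update performed above (the deployment and container names here are illustrative; the e2e test itself uses replication controllers, not a Deployment):

```shell
# Sketch: the rollout-based replacement for the deprecated
# "kubectl rolling-update" flow shown in the log. KUBECTL can be
# overridden (e.g. KUBECTL=echo) to dry-run the command strings.
KUBECTL="${KUBECTL:-kubectl}"

rolling_update() {
  # swap the image on a hypothetical "update-demo" Deployment...
  "$KUBECTL" set image deployment/update-demo \
      update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
  # ...then block until the new ReplicaSet is fully rolled out
  "$KUBECTL" rollout status deployment/update-demo --timeout=120s
}
```

Unlike `rolling-update`, which the client drove imperatively (scale up, scale down, rename, as the stdout above shows), `rollout` delegates the surge/unavailability bookkeeping to the Deployment controller server-side.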
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:33:23.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 13:33:23.125: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3947862f-c4b2-49ee-a150-d21d07f83856" in namespace "downward-api-2935" to be "success or failure"
Feb  9 13:33:23.182: INFO: Pod "downwardapi-volume-3947862f-c4b2-49ee-a150-d21d07f83856": Phase="Pending", Reason="", readiness=false. Elapsed: 57.391529ms
Feb  9 13:33:25.193: INFO: Pod "downwardapi-volume-3947862f-c4b2-49ee-a150-d21d07f83856": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068164654s
Feb  9 13:33:27.203: INFO: Pod "downwardapi-volume-3947862f-c4b2-49ee-a150-d21d07f83856": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078092s
Feb  9 13:33:29.215: INFO: Pod "downwardapi-volume-3947862f-c4b2-49ee-a150-d21d07f83856": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090366489s
Feb  9 13:33:31.230: INFO: Pod "downwardapi-volume-3947862f-c4b2-49ee-a150-d21d07f83856": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105403522s
Feb  9 13:33:33.239: INFO: Pod "downwardapi-volume-3947862f-c4b2-49ee-a150-d21d07f83856": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114011901s
STEP: Saw pod success
Feb  9 13:33:33.239: INFO: Pod "downwardapi-volume-3947862f-c4b2-49ee-a150-d21d07f83856" satisfied condition "success or failure"
Feb  9 13:33:33.244: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-3947862f-c4b2-49ee-a150-d21d07f83856 container client-container: 
STEP: delete the pod
Feb  9 13:33:33.363: INFO: Waiting for pod downwardapi-volume-3947862f-c4b2-49ee-a150-d21d07f83856 to disappear
Feb  9 13:33:33.381: INFO: Pod downwardapi-volume-3947862f-c4b2-49ee-a150-d21d07f83856 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:33:33.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2935" for this suite.
Feb  9 13:33:39.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:33:39.671: INFO: namespace downward-api-2935 deletion completed in 6.282354852s

• [SLOW TEST:16.627 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:33:39.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-7n9w
STEP: Creating a pod to test atomic-volume-subpath
Feb  9 13:33:39.811: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7n9w" in namespace "subpath-31" to be "success or failure"
Feb  9 13:33:39.839: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Pending", Reason="", readiness=false. Elapsed: 27.892318ms
Feb  9 13:33:41.863: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051345601s
Feb  9 13:33:43.883: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071352106s
Feb  9 13:33:45.895: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083964974s
Feb  9 13:33:47.912: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100364272s
Feb  9 13:33:49.920: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Running", Reason="", readiness=true. Elapsed: 10.108824502s
Feb  9 13:33:51.929: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Running", Reason="", readiness=true. Elapsed: 12.117853791s
Feb  9 13:33:53.944: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Running", Reason="", readiness=true. Elapsed: 14.132982409s
Feb  9 13:33:55.953: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Running", Reason="", readiness=true. Elapsed: 16.142124182s
Feb  9 13:33:57.966: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Running", Reason="", readiness=true. Elapsed: 18.154596463s
Feb  9 13:33:59.974: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Running", Reason="", readiness=true. Elapsed: 20.163146923s
Feb  9 13:34:01.982: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Running", Reason="", readiness=true. Elapsed: 22.170536057s
Feb  9 13:34:03.994: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Running", Reason="", readiness=true. Elapsed: 24.182652346s
Feb  9 13:34:06.013: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Running", Reason="", readiness=true. Elapsed: 26.201773003s
Feb  9 13:34:08.020: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Running", Reason="", readiness=true. Elapsed: 28.208610397s
Feb  9 13:34:10.041: INFO: Pod "pod-subpath-test-projected-7n9w": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.230011839s
STEP: Saw pod success
Feb  9 13:34:10.041: INFO: Pod "pod-subpath-test-projected-7n9w" satisfied condition "success or failure"
Feb  9 13:34:10.064: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-7n9w container test-container-subpath-projected-7n9w: 
STEP: delete the pod
Feb  9 13:34:10.204: INFO: Waiting for pod pod-subpath-test-projected-7n9w to disappear
Feb  9 13:34:10.208: INFO: Pod pod-subpath-test-projected-7n9w no longer exists
STEP: Deleting pod pod-subpath-test-projected-7n9w
Feb  9 13:34:10.209: INFO: Deleting pod "pod-subpath-test-projected-7n9w" in namespace "subpath-31"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:34:10.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-31" for this suite.
Feb  9 13:34:16.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:34:16.410: INFO: namespace subpath-31 deletion completed in 6.190365091s

• [SLOW TEST:36.739 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:34:16.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5561
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  9 13:34:16.498: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  9 13:34:52.696: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-5561 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 13:34:52.696: INFO: >>> kubeConfig: /root/.kube/config
I0209 13:34:52.760185       8 log.go:172] (0xc0000ed600) (0xc0005a1220) Create stream
I0209 13:34:52.760222       8 log.go:172] (0xc0000ed600) (0xc0005a1220) Stream added, broadcasting: 1
I0209 13:34:52.766419       8 log.go:172] (0xc0000ed600) Reply frame received for 1
I0209 13:34:52.766453       8 log.go:172] (0xc0000ed600) (0xc0012801e0) Create stream
I0209 13:34:52.766463       8 log.go:172] (0xc0000ed600) (0xc0012801e0) Stream added, broadcasting: 3
I0209 13:34:52.767914       8 log.go:172] (0xc0000ed600) Reply frame received for 3
I0209 13:34:52.767980       8 log.go:172] (0xc0000ed600) (0xc0002b9400) Create stream
I0209 13:34:52.767994       8 log.go:172] (0xc0000ed600) (0xc0002b9400) Stream added, broadcasting: 5
I0209 13:34:52.769707       8 log.go:172] (0xc0000ed600) Reply frame received for 5
I0209 13:34:53.023753       8 log.go:172] (0xc0000ed600) Data frame received for 3
I0209 13:34:53.023846       8 log.go:172] (0xc0012801e0) (3) Data frame handling
I0209 13:34:53.023864       8 log.go:172] (0xc0012801e0) (3) Data frame sent
I0209 13:34:53.136628       8 log.go:172] (0xc0000ed600) Data frame received for 1
I0209 13:34:53.136864       8 log.go:172] (0xc0005a1220) (1) Data frame handling
I0209 13:34:53.136918       8 log.go:172] (0xc0005a1220) (1) Data frame sent
I0209 13:34:53.138970       8 log.go:172] (0xc0000ed600) (0xc0005a1220) Stream removed, broadcasting: 1
I0209 13:34:53.139260       8 log.go:172] (0xc0000ed600) (0xc0012801e0) Stream removed, broadcasting: 3
I0209 13:34:53.140840       8 log.go:172] (0xc0000ed600) (0xc0002b9400) Stream removed, broadcasting: 5
I0209 13:34:53.140923       8 log.go:172] (0xc0000ed600) (0xc0005a1220) Stream removed, broadcasting: 1
I0209 13:34:53.140937       8 log.go:172] (0xc0000ed600) (0xc0012801e0) Stream removed, broadcasting: 3
I0209 13:34:53.140951       8 log.go:172] (0xc0000ed600) (0xc0002b9400) Stream removed, broadcasting: 5
I0209 13:34:53.141924       8 log.go:172] (0xc0000ed600) Go away received
Feb  9 13:34:53.142: INFO: Waiting for endpoints: map[]
Feb  9 13:34:53.149: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-5561 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 13:34:53.149: INFO: >>> kubeConfig: /root/.kube/config
I0209 13:34:53.205075       8 log.go:172] (0xc000c176b0) (0xc001b78a00) Create stream
I0209 13:34:53.205125       8 log.go:172] (0xc000c176b0) (0xc001b78a00) Stream added, broadcasting: 1
I0209 13:34:53.211535       8 log.go:172] (0xc000c176b0) Reply frame received for 1
I0209 13:34:53.211610       8 log.go:172] (0xc000c176b0) (0xc001b78aa0) Create stream
I0209 13:34:53.211622       8 log.go:172] (0xc000c176b0) (0xc001b78aa0) Stream added, broadcasting: 3
I0209 13:34:53.213046       8 log.go:172] (0xc000c176b0) Reply frame received for 3
I0209 13:34:53.213074       8 log.go:172] (0xc000c176b0) (0xc0000fe960) Create stream
I0209 13:34:53.213086       8 log.go:172] (0xc000c176b0) (0xc0000fe960) Stream added, broadcasting: 5
I0209 13:34:53.214429       8 log.go:172] (0xc000c176b0) Reply frame received for 5
I0209 13:34:53.325697       8 log.go:172] (0xc000c176b0) Data frame received for 3
I0209 13:34:53.325759       8 log.go:172] (0xc001b78aa0) (3) Data frame handling
I0209 13:34:53.325790       8 log.go:172] (0xc001b78aa0) (3) Data frame sent
I0209 13:34:53.467207       8 log.go:172] (0xc000c176b0) (0xc001b78aa0) Stream removed, broadcasting: 3
I0209 13:34:53.467376       8 log.go:172] (0xc000c176b0) Data frame received for 1
I0209 13:34:53.467487       8 log.go:172] (0xc000c176b0) (0xc0000fe960) Stream removed, broadcasting: 5
I0209 13:34:53.467559       8 log.go:172] (0xc001b78a00) (1) Data frame handling
I0209 13:34:53.467601       8 log.go:172] (0xc001b78a00) (1) Data frame sent
I0209 13:34:53.467617       8 log.go:172] (0xc000c176b0) (0xc001b78a00) Stream removed, broadcasting: 1
I0209 13:34:53.467641       8 log.go:172] (0xc000c176b0) Go away received
I0209 13:34:53.467919       8 log.go:172] (0xc000c176b0) (0xc001b78a00) Stream removed, broadcasting: 1
I0209 13:34:53.467954       8 log.go:172] (0xc000c176b0) (0xc001b78aa0) Stream removed, broadcasting: 3
I0209 13:34:53.467972       8 log.go:172] (0xc000c176b0) (0xc0000fe960) Stream removed, broadcasting: 5
Feb  9 13:34:53.468: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:34:53.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5561" for this suite.
Feb  9 13:35:17.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:35:17.650: INFO: namespace pod-network-test-5561 deletion completed in 24.171612845s

• [SLOW TEST:61.239 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:35:17.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6227.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6227.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6227.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6227.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6227.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6227.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  9 13:35:29.850: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6227/dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36: the server could not find the requested resource (get pods dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36)
Feb  9 13:35:29.859: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6227/dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36: the server could not find the requested resource (get pods dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36)
Feb  9 13:35:29.878: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-6227.svc.cluster.local from pod dns-6227/dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36: the server could not find the requested resource (get pods dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36)
Feb  9 13:35:29.898: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-6227/dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36: the server could not find the requested resource (get pods dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36)
Feb  9 13:35:29.904: INFO: Unable to read jessie_udp@PodARecord from pod dns-6227/dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36: the server could not find the requested resource (get pods dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36)
Feb  9 13:35:29.909: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6227/dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36: the server could not find the requested resource (get pods dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36)
Feb  9 13:35:29.909: INFO: Lookups using dns-6227/dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-6227.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  9 13:35:34.961: INFO: DNS probes using dns-6227/dns-test-704a9507-cad9-4d04-b60d-2acddab4fb36 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:35:35.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6227" for this suite.
Feb  9 13:35:41.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:35:41.297: INFO: namespace dns-6227 deletion completed in 6.200018763s

• [SLOW TEST:23.647 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:35:41.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  9 13:35:41.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6785'
Feb  9 13:35:41.506: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  9 13:35:41.506: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Feb  9 13:35:41.561: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-m4vbw]
Feb  9 13:35:41.561: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-m4vbw" in namespace "kubectl-6785" to be "running and ready"
Feb  9 13:35:41.581: INFO: Pod "e2e-test-nginx-rc-m4vbw": Phase="Pending", Reason="", readiness=false. Elapsed: 19.668234ms
Feb  9 13:35:43.592: INFO: Pod "e2e-test-nginx-rc-m4vbw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030388916s
Feb  9 13:35:45.653: INFO: Pod "e2e-test-nginx-rc-m4vbw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09131312s
Feb  9 13:35:47.660: INFO: Pod "e2e-test-nginx-rc-m4vbw": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097921953s
Feb  9 13:35:49.669: INFO: Pod "e2e-test-nginx-rc-m4vbw": Phase="Pending", Reason="", readiness=false. Elapsed: 8.107376054s
Feb  9 13:35:51.679: INFO: Pod "e2e-test-nginx-rc-m4vbw": Phase="Running", Reason="", readiness=true. Elapsed: 10.117367442s
Feb  9 13:35:51.679: INFO: Pod "e2e-test-nginx-rc-m4vbw" satisfied condition "running and ready"
Feb  9 13:35:51.679: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-m4vbw]
Feb  9 13:35:51.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6785'
Feb  9 13:35:51.920: INFO: stderr: ""
Feb  9 13:35:51.920: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Feb  9 13:35:51.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6785'
Feb  9 13:35:52.035: INFO: stderr: ""
Feb  9 13:35:52.036: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:35:52.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6785" for this suite.
Feb  9 13:36:14.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:36:14.187: INFO: namespace kubectl-6785 deletion completed in 22.135826613s

• [SLOW TEST:32.889 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:36:14.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-247c17e6-65f9-4f8c-988b-4dac5c2f8972
STEP: Creating a pod to test consume configMaps
Feb  9 13:36:14.321: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be66ad0c-f909-4cd2-bbef-126a0bea0b0c" in namespace "projected-2460" to be "success or failure"
Feb  9 13:36:14.336: INFO: Pod "pod-projected-configmaps-be66ad0c-f909-4cd2-bbef-126a0bea0b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.766408ms
Feb  9 13:36:16.352: INFO: Pod "pod-projected-configmaps-be66ad0c-f909-4cd2-bbef-126a0bea0b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030111751s
Feb  9 13:36:18.364: INFO: Pod "pod-projected-configmaps-be66ad0c-f909-4cd2-bbef-126a0bea0b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04294339s
Feb  9 13:36:20.374: INFO: Pod "pod-projected-configmaps-be66ad0c-f909-4cd2-bbef-126a0bea0b0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052247417s
Feb  9 13:36:22.383: INFO: Pod "pod-projected-configmaps-be66ad0c-f909-4cd2-bbef-126a0bea0b0c": Phase="Running", Reason="", readiness=true. Elapsed: 8.061724798s
Feb  9 13:36:24.391: INFO: Pod "pod-projected-configmaps-be66ad0c-f909-4cd2-bbef-126a0bea0b0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069609275s
STEP: Saw pod success
Feb  9 13:36:24.391: INFO: Pod "pod-projected-configmaps-be66ad0c-f909-4cd2-bbef-126a0bea0b0c" satisfied condition "success or failure"
Feb  9 13:36:24.396: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-be66ad0c-f909-4cd2-bbef-126a0bea0b0c container projected-configmap-volume-test: 
STEP: delete the pod
Feb  9 13:36:24.947: INFO: Waiting for pod pod-projected-configmaps-be66ad0c-f909-4cd2-bbef-126a0bea0b0c to disappear
Feb  9 13:36:24.963: INFO: Pod pod-projected-configmaps-be66ad0c-f909-4cd2-bbef-126a0bea0b0c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:36:24.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2460" for this suite.
Feb  9 13:36:31.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:36:31.369: INFO: namespace projected-2460 deletion completed in 6.292396849s

• [SLOW TEST:17.182 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:36:31.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 13:36:31.464: INFO: Creating deployment "test-recreate-deployment"
Feb  9 13:36:31.471: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Feb  9 13:36:31.502: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb  9 13:36:33.524: INFO: Waiting for deployment "test-recreate-deployment" to complete
Feb  9 13:36:33.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 13:36:35.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 13:36:37.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 13:36:39.541: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716852191, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 13:36:41.543: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb  9 13:36:41.553: INFO: Updating deployment test-recreate-deployment
Feb  9 13:36:41.553: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  9 13:36:42.011: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-4004,SelfLink:/apis/apps/v1/namespaces/deployment-4004/deployments/test-recreate-deployment,UID:4e1a27f1-dfaa-4f01-a534-188d96a436f1,ResourceVersion:23698422,Generation:2,CreationTimestamp:2020-02-09 13:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-02-09 13:36:41 +0000 UTC 2020-02-09 13:36:41 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-02-09 13:36:41 +0000 UTC 2020-02-09 13:36:31 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Feb  9 13:36:42.040: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-4004,SelfLink:/apis/apps/v1/namespaces/deployment-4004/replicasets/test-recreate-deployment-5c8c9cc69d,UID:f2fd48b6-4330-4391-9d60-70d8d64a1e0d,ResourceVersion:23698421,Generation:1,CreationTimestamp:2020-02-09 13:36:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4e1a27f1-dfaa-4f01-a534-188d96a436f1 0xc001d7ccb7 0xc001d7ccb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  9 13:36:42.040: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb  9 13:36:42.041: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-4004,SelfLink:/apis/apps/v1/namespaces/deployment-4004/replicasets/test-recreate-deployment-6df85df6b9,UID:5489de30-247a-4cae-a61b-948b821c1ab2,ResourceVersion:23698409,Generation:2,CreationTimestamp:2020-02-09 13:36:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 4e1a27f1-dfaa-4f01-a534-188d96a436f1 0xc001d7cdb7 0xc001d7cdb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  9 13:36:42.052: INFO: Pod "test-recreate-deployment-5c8c9cc69d-cj7h2" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-cj7h2,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-4004,SelfLink:/api/v1/namespaces/deployment-4004/pods/test-recreate-deployment-5c8c9cc69d-cj7h2,UID:17d03fb2-e459-4f15-b784-7a11ee6709cc,ResourceVersion:23698423,Generation:0,CreationTimestamp:2020-02-09 13:36:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d f2fd48b6-4330-4391-9d60-70d8d64a1e0d 0xc00005b9d7 0xc00005b9d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mr4vv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mr4vv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-mr4vv true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00005bd00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00005bd20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 13:36:41 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 13:36:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 13:36:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 13:36:41 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-09 13:36:41 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:36:42.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4004" for this suite.
Feb  9 13:36:50.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:36:50.299: INFO: namespace deployment-4004 deletion completed in 8.228912082s

• [SLOW TEST:18.929 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:36:50.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb  9 13:36:59.156: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4852 pod-service-account-45db8eb4-88f2-4559-882e-88ead53c2656 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb  9 13:36:59.748: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4852 pod-service-account-45db8eb4-88f2-4559-882e-88ead53c2656 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb  9 13:37:00.196: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4852 pod-service-account-45db8eb4-88f2-4559-882e-88ead53c2656 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:37:00.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4852" for this suite.
Feb  9 13:37:06.729: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:37:06.985: INFO: namespace svcaccounts-4852 deletion completed in 6.288391281s

• [SLOW TEST:16.686 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:37:06.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8770
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  9 13:37:07.067: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  9 13:37:41.247: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8770 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 13:37:41.247: INFO: >>> kubeConfig: /root/.kube/config
I0209 13:37:41.316530       8 log.go:172] (0xc00180c9a0) (0xc00242b860) Create stream
I0209 13:37:41.316617       8 log.go:172] (0xc00180c9a0) (0xc00242b860) Stream added, broadcasting: 1
I0209 13:37:41.323550       8 log.go:172] (0xc00180c9a0) Reply frame received for 1
I0209 13:37:41.323596       8 log.go:172] (0xc00180c9a0) (0xc001c66aa0) Create stream
I0209 13:37:41.323610       8 log.go:172] (0xc00180c9a0) (0xc001c66aa0) Stream added, broadcasting: 3
I0209 13:37:41.327498       8 log.go:172] (0xc00180c9a0) Reply frame received for 3
I0209 13:37:41.327722       8 log.go:172] (0xc00180c9a0) (0xc00242b900) Create stream
I0209 13:37:41.327740       8 log.go:172] (0xc00180c9a0) (0xc00242b900) Stream added, broadcasting: 5
I0209 13:37:41.329847       8 log.go:172] (0xc00180c9a0) Reply frame received for 5
I0209 13:37:42.488240       8 log.go:172] (0xc00180c9a0) Data frame received for 3
I0209 13:37:42.488376       8 log.go:172] (0xc001c66aa0) (3) Data frame handling
I0209 13:37:42.488409       8 log.go:172] (0xc001c66aa0) (3) Data frame sent
I0209 13:37:42.753349       8 log.go:172] (0xc00180c9a0) (0xc001c66aa0) Stream removed, broadcasting: 3
I0209 13:37:42.753566       8 log.go:172] (0xc00180c9a0) Data frame received for 1
I0209 13:37:42.753587       8 log.go:172] (0xc00242b860) (1) Data frame handling
I0209 13:37:42.753619       8 log.go:172] (0xc00242b860) (1) Data frame sent
I0209 13:37:42.753661       8 log.go:172] (0xc00180c9a0) (0xc00242b860) Stream removed, broadcasting: 1
I0209 13:37:42.754097       8 log.go:172] (0xc00180c9a0) (0xc00242b900) Stream removed, broadcasting: 5
I0209 13:37:42.754138       8 log.go:172] (0xc00180c9a0) Go away received
I0209 13:37:42.754540       8 log.go:172] (0xc00180c9a0) (0xc00242b860) Stream removed, broadcasting: 1
I0209 13:37:42.754583       8 log.go:172] (0xc00180c9a0) (0xc001c66aa0) Stream removed, broadcasting: 3
I0209 13:37:42.754592       8 log.go:172] (0xc00180c9a0) (0xc00242b900) Stream removed, broadcasting: 5
Feb  9 13:37:42.754: INFO: Found all expected endpoints: [netserver-0]
Feb  9 13:37:42.763: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8770 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 13:37:42.763: INFO: >>> kubeConfig: /root/.kube/config
I0209 13:37:42.827075       8 log.go:172] (0xc001de7130) (0xc002139220) Create stream
I0209 13:37:42.827118       8 log.go:172] (0xc001de7130) (0xc002139220) Stream added, broadcasting: 1
I0209 13:37:42.834712       8 log.go:172] (0xc001de7130) Reply frame received for 1
I0209 13:37:42.834766       8 log.go:172] (0xc001de7130) (0xc0021392c0) Create stream
I0209 13:37:42.834782       8 log.go:172] (0xc001de7130) (0xc0021392c0) Stream added, broadcasting: 3
I0209 13:37:42.837348       8 log.go:172] (0xc001de7130) Reply frame received for 3
I0209 13:37:42.837515       8 log.go:172] (0xc001de7130) (0xc002139360) Create stream
I0209 13:37:42.837537       8 log.go:172] (0xc001de7130) (0xc002139360) Stream added, broadcasting: 5
I0209 13:37:42.841450       8 log.go:172] (0xc001de7130) Reply frame received for 5
I0209 13:37:43.977513       8 log.go:172] (0xc001de7130) Data frame received for 3
I0209 13:37:43.977600       8 log.go:172] (0xc0021392c0) (3) Data frame handling
I0209 13:37:43.977629       8 log.go:172] (0xc0021392c0) (3) Data frame sent
I0209 13:37:44.133011       8 log.go:172] (0xc001de7130) Data frame received for 1
I0209 13:37:44.133196       8 log.go:172] (0xc002139220) (1) Data frame handling
I0209 13:37:44.133281       8 log.go:172] (0xc002139220) (1) Data frame sent
I0209 13:37:44.133675       8 log.go:172] (0xc001de7130) (0xc0021392c0) Stream removed, broadcasting: 3
I0209 13:37:44.133915       8 log.go:172] (0xc001de7130) (0xc002139220) Stream removed, broadcasting: 1
I0209 13:37:44.135148       8 log.go:172] (0xc001de7130) (0xc002139360) Stream removed, broadcasting: 5
I0209 13:37:44.135274       8 log.go:172] (0xc001de7130) (0xc002139220) Stream removed, broadcasting: 1
I0209 13:37:44.135307       8 log.go:172] (0xc001de7130) (0xc0021392c0) Stream removed, broadcasting: 3
I0209 13:37:44.135334       8 log.go:172] (0xc001de7130) (0xc002139360) Stream removed, broadcasting: 5
I0209 13:37:44.135599       8 log.go:172] (0xc001de7130) Go away received
Feb  9 13:37:44.136: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:37:44.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8770" for this suite.
Feb  9 13:38:08.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:38:08.259: INFO: namespace pod-network-test-8770 deletion completed in 24.113064963s

• [SLOW TEST:61.274 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:38:08.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1454.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1454.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1454.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  9 13:38:22.455: INFO: File jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local from pod  dns-1454/dns-test-825236cc-2eb3-4a08-b87b-5ba4c208c040 contains '' instead of 'foo.example.com.'
Feb  9 13:38:22.455: INFO: Lookups using dns-1454/dns-test-825236cc-2eb3-4a08-b87b-5ba4c208c040 failed for: [jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local]

Feb  9 13:38:27.500: INFO: DNS probes using dns-test-825236cc-2eb3-4a08-b87b-5ba4c208c040 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1454.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1454.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1454.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  9 13:38:43.910: INFO: File wheezy_udp@dns-test-service-3.dns-1454.svc.cluster.local from pod  dns-1454/dns-test-9ff397f5-8c83-4de5-8e26-cd79245d22e9 contains '' instead of 'bar.example.com.'
Feb  9 13:38:43.920: INFO: File jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local from pod  dns-1454/dns-test-9ff397f5-8c83-4de5-8e26-cd79245d22e9 contains '' instead of 'bar.example.com.'
Feb  9 13:38:43.920: INFO: Lookups using dns-1454/dns-test-9ff397f5-8c83-4de5-8e26-cd79245d22e9 failed for: [wheezy_udp@dns-test-service-3.dns-1454.svc.cluster.local jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local]

Feb  9 13:38:48.939: INFO: File wheezy_udp@dns-test-service-3.dns-1454.svc.cluster.local from pod  dns-1454/dns-test-9ff397f5-8c83-4de5-8e26-cd79245d22e9 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  9 13:38:48.947: INFO: File jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local from pod  dns-1454/dns-test-9ff397f5-8c83-4de5-8e26-cd79245d22e9 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  9 13:38:48.947: INFO: Lookups using dns-1454/dns-test-9ff397f5-8c83-4de5-8e26-cd79245d22e9 failed for: [wheezy_udp@dns-test-service-3.dns-1454.svc.cluster.local jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local]

Feb  9 13:38:53.949: INFO: File jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local from pod  dns-1454/dns-test-9ff397f5-8c83-4de5-8e26-cd79245d22e9 contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb  9 13:38:53.949: INFO: Lookups using dns-1454/dns-test-9ff397f5-8c83-4de5-8e26-cd79245d22e9 failed for: [jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local]

Feb  9 13:38:58.964: INFO: DNS probes using dns-test-9ff397f5-8c83-4de5-8e26-cd79245d22e9 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1454.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1454.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1454.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  9 13:39:15.527: INFO: File wheezy_udp@dns-test-service-3.dns-1454.svc.cluster.local from pod  dns-1454/dns-test-e8ac1f55-d8cd-42a5-9811-b2154726d7de contains '' instead of '10.104.71.61'
Feb  9 13:39:15.533: INFO: File jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local from pod  dns-1454/dns-test-e8ac1f55-d8cd-42a5-9811-b2154726d7de contains '' instead of '10.104.71.61'
Feb  9 13:39:15.533: INFO: Lookups using dns-1454/dns-test-e8ac1f55-d8cd-42a5-9811-b2154726d7de failed for: [wheezy_udp@dns-test-service-3.dns-1454.svc.cluster.local jessie_udp@dns-test-service-3.dns-1454.svc.cluster.local]

Feb  9 13:39:20.566: INFO: DNS probes using dns-test-e8ac1f55-d8cd-42a5-9811-b2154726d7de succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:39:20.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1454" for this suite.
Feb  9 13:39:26.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:39:27.324: INFO: namespace dns-1454 deletion completed in 6.63698526s

• [SLOW TEST:79.063 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:39:27.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-8962, will wait for the garbage collector to delete the pods
Feb  9 13:39:37.462: INFO: Deleting Job.batch foo took: 11.698311ms
Feb  9 13:39:37.762: INFO: Terminating Job.batch foo pods took: 300.582409ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:40:15.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8962" for this suite.
Feb  9 13:40:21.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:40:21.711: INFO: namespace job-8962 deletion completed in 6.13743901s

• [SLOW TEST:54.387 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:40:21.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  9 13:40:32.487: INFO: Successfully updated pod "labelsupdate70b72028-ba7d-4a95-9574-fa1ec2dff1b4"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:40:34.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9671" for this suite.
Feb  9 13:40:52.631: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:40:52.768: INFO: namespace projected-9671 deletion completed in 18.155916424s

• [SLOW TEST:31.057 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:40:52.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb  9 13:43:53.137: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:43:53.162: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:43:55.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:43:55.175: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:43:57.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:43:57.175: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:43:59.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:43:59.172: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:01.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:01.175: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:03.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:03.172: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:05.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:05.174: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:07.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:07.170: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:09.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:09.169: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:11.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:11.198: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:13.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:13.172: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:15.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:15.172: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:17.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:17.176: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:19.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:19.172: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:21.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:21.172: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:23.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:23.172: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:25.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:25.174: INFO: Pod pod-with-poststart-exec-hook still exists
Feb  9 13:44:27.163: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb  9 13:44:27.174: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:44:27.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3202" for this suite.
Feb  9 13:44:49.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:44:49.307: INFO: namespace container-lifecycle-hook-3202 deletion completed in 22.12771771s

• [SLOW TEST:236.538 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:44:49.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9891.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9891.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9891.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9891.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9891.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9891.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9891.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9891.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9891.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9891.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9891.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9891.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9891.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 162.178.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.178.162_udp@PTR;check="$$(dig +tcp +noall +answer +search 162.178.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.178.162_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9891.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9891.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9891.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9891.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9891.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9891.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9891.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9891.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9891.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9891.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9891.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9891.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9891.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 162.178.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.178.162_udp@PTR;check="$$(dig +tcp +noall +answer +search 162.178.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.178.162_tcp@PTR;sleep 1; done

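The PodARecord in the probe scripts above is built by dashing the pod's IP octets and appending the namespace's pod DNS suffix. The same awk idiom the probes use, run against a hypothetical pod IP (10.44.0.5 is an assumption, not from the log):

```shell
# Mirrors the podARec construction from the probe script; pod IP is hypothetical.
podIP="10.44.0.5"
podARec=$(echo "$podIP" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-9891.pod.cluster.local"}')
echo "$podARec"
```

This is why the result files are keyed as wheezy_udp@PodARecord and so on: the record name is derived at probe time from `hostname -i`.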
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  9 13:45:03.593: INFO: Unable to read wheezy_udp@dns-test-service.dns-9891.svc.cluster.local from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.599: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9891.svc.cluster.local from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.606: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9891.svc.cluster.local from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.610: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9891.svc.cluster.local from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.619: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-9891.svc.cluster.local from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.624: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-9891.svc.cluster.local from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.629: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.634: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.638: INFO: Unable to read 10.98.178.162_udp@PTR from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.641: INFO: Unable to read 10.98.178.162_tcp@PTR from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.644: INFO: Unable to read jessie_udp@dns-test-service.dns-9891.svc.cluster.local from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.647: INFO: Unable to read jessie_tcp@dns-test-service.dns-9891.svc.cluster.local from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.651: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9891.svc.cluster.local from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.654: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9891.svc.cluster.local from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.658: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-9891.svc.cluster.local from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.661: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-9891.svc.cluster.local from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.664: INFO: Unable to read jessie_udp@PodARecord from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.667: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.670: INFO: Unable to read 10.98.178.162_udp@PTR from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.673: INFO: Unable to read 10.98.178.162_tcp@PTR from pod dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16: the server could not find the requested resource (get pods dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16)
Feb  9 13:45:03.673: INFO: Lookups using dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16 failed for: [wheezy_udp@dns-test-service.dns-9891.svc.cluster.local wheezy_tcp@dns-test-service.dns-9891.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9891.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9891.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-9891.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-9891.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.98.178.162_udp@PTR 10.98.178.162_tcp@PTR jessie_udp@dns-test-service.dns-9891.svc.cluster.local jessie_tcp@dns-test-service.dns-9891.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9891.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9891.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-9891.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-9891.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.98.178.162_udp@PTR 10.98.178.162_tcp@PTR]

Feb  9 13:45:08.795: INFO: DNS probes using dns-9891/dns-test-8a6bf75f-d497-4b8c-9bd4-4871b9927f16 succeeded
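The PTR probe names above (10.98.178.162_udp@PTR) pair the service's cluster IP with its reverse-lookup query name (162.178.98.10.in-addr.arpa.): the octets are simply reversed under in-addr.arpa. A minimal sketch of that mapping:

```shell
# Reverse-DNS name construction for the service IP seen in the log.
ip="10.98.178.162"
ptr=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$ptr"
```

The earlier "Unable to read ... @PTR" lines are the probes polling before kube-dns had served the records; the run converges to "DNS probes ... succeeded" once all result files appear.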

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:45:09.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9891" for this suite.
Feb  9 13:45:15.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:45:15.500: INFO: namespace dns-9891 deletion completed in 6.260734229s

• [SLOW TEST:26.193 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:45:15.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Feb  9 13:45:15.566: INFO: Waiting up to 5m0s for pod "var-expansion-4e116489-db10-4288-9a5b-852684a3bf8d" in namespace "var-expansion-2497" to be "success or failure"
Feb  9 13:45:15.589: INFO: Pod "var-expansion-4e116489-db10-4288-9a5b-852684a3bf8d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.957441ms
Feb  9 13:45:17.599: INFO: Pod "var-expansion-4e116489-db10-4288-9a5b-852684a3bf8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03221098s
Feb  9 13:45:19.615: INFO: Pod "var-expansion-4e116489-db10-4288-9a5b-852684a3bf8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048356485s
Feb  9 13:45:21.624: INFO: Pod "var-expansion-4e116489-db10-4288-9a5b-852684a3bf8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057067254s
Feb  9 13:45:23.636: INFO: Pod "var-expansion-4e116489-db10-4288-9a5b-852684a3bf8d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069846621s
Feb  9 13:45:25.646: INFO: Pod "var-expansion-4e116489-db10-4288-9a5b-852684a3bf8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079367177s
STEP: Saw pod success
Feb  9 13:45:25.646: INFO: Pod "var-expansion-4e116489-db10-4288-9a5b-852684a3bf8d" satisfied condition "success or failure"
Feb  9 13:45:25.650: INFO: Trying to get logs from node iruya-node pod var-expansion-4e116489-db10-4288-9a5b-852684a3bf8d container dapi-container: 
STEP: delete the pod
Feb  9 13:45:25.831: INFO: Waiting for pod var-expansion-4e116489-db10-4288-9a5b-852684a3bf8d to disappear
Feb  9 13:45:25.844: INFO: Pod var-expansion-4e116489-db10-4288-9a5b-852684a3bf8d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:45:25.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2497" for this suite.
Feb  9 13:45:31.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:45:32.072: INFO: namespace var-expansion-2497 deletion completed in 6.215205127s

• [SLOW TEST:16.572 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
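The "env composition" exercised above is Kubernetes' dependent environment variable expansion: an env value may reference a previously defined variable as $(VAR). A hypothetical sketch (the container name dapi-container matches the log; image, variable names, and values are assumptions):

```yaml
# Hypothetical manifest; only the container name is taken from the log.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $(COMPOSED)"]
    env:
    - name: FOO
      value: bar
    - name: COMPOSED
      value: "prefix-$(FOO)"   # expands to prefix-bar; must be defined after FOO
```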
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:45:32.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb  9 13:45:32.334: INFO: Waiting up to 5m0s for pod "pod-1ccf9de7-7153-40a2-8b92-d142fd2b6b75" in namespace "emptydir-1130" to be "success or failure"
Feb  9 13:45:32.342: INFO: Pod "pod-1ccf9de7-7153-40a2-8b92-d142fd2b6b75": Phase="Pending", Reason="", readiness=false. Elapsed: 7.273078ms
Feb  9 13:45:34.351: INFO: Pod "pod-1ccf9de7-7153-40a2-8b92-d142fd2b6b75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016483185s
Feb  9 13:45:36.363: INFO: Pod "pod-1ccf9de7-7153-40a2-8b92-d142fd2b6b75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028189121s
Feb  9 13:45:38.372: INFO: Pod "pod-1ccf9de7-7153-40a2-8b92-d142fd2b6b75": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037799473s
Feb  9 13:45:40.387: INFO: Pod "pod-1ccf9de7-7153-40a2-8b92-d142fd2b6b75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052395389s
STEP: Saw pod success
Feb  9 13:45:40.387: INFO: Pod "pod-1ccf9de7-7153-40a2-8b92-d142fd2b6b75" satisfied condition "success or failure"
Feb  9 13:45:40.391: INFO: Trying to get logs from node iruya-node pod pod-1ccf9de7-7153-40a2-8b92-d142fd2b6b75 container test-container: 
STEP: delete the pod
Feb  9 13:45:40.523: INFO: Waiting for pod pod-1ccf9de7-7153-40a2-8b92-d142fd2b6b75 to disappear
Feb  9 13:45:40.528: INFO: Pod pod-1ccf9de7-7153-40a2-8b92-d142fd2b6b75 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:45:40.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1130" for this suite.
Feb  9 13:45:46.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:45:46.704: INFO: namespace emptydir-1130 deletion completed in 6.169771246s

• [SLOW TEST:14.632 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
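The (root,0666,default) case above mounts an emptyDir on the default medium and verifies a file created with mode 0666, i.e. read-write for user, group, and other. A local illustration of that mode check (a temp file stands in for the emptyDir mount; `stat -c` assumes GNU coreutils):

```shell
# Demonstrates the 0666 permission bits the test asserts; path is arbitrary.
f=$(mktemp)
chmod 0666 "$f"
stat -c '%a' "$f"   # octal mode, without the leading file-type bits
rm -f "$f"
```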
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:45:46.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 13:45:46.888: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb  9 13:45:47.018: INFO: Number of nodes with available pods: 0
Feb  9 13:45:47.018: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:45:48.925: INFO: Number of nodes with available pods: 0
Feb  9 13:45:48.925: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:45:49.488: INFO: Number of nodes with available pods: 0
Feb  9 13:45:49.488: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:45:50.030: INFO: Number of nodes with available pods: 0
Feb  9 13:45:50.030: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:45:51.039: INFO: Number of nodes with available pods: 0
Feb  9 13:45:51.039: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:45:53.804: INFO: Number of nodes with available pods: 0
Feb  9 13:45:53.804: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:45:54.039: INFO: Number of nodes with available pods: 0
Feb  9 13:45:54.039: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:45:55.030: INFO: Number of nodes with available pods: 0
Feb  9 13:45:55.030: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:45:56.032: INFO: Number of nodes with available pods: 0
Feb  9 13:45:56.033: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:45:57.031: INFO: Number of nodes with available pods: 2
Feb  9 13:45:57.031: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb  9 13:45:57.073: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:45:57.073: INFO: Wrong image for pod: daemon-set-4w8gq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:45:58.086: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:45:58.086: INFO: Wrong image for pod: daemon-set-4w8gq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:45:59.084: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:45:59.085: INFO: Wrong image for pod: daemon-set-4w8gq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:00.084: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:00.084: INFO: Wrong image for pod: daemon-set-4w8gq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:01.089: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:01.089: INFO: Wrong image for pod: daemon-set-4w8gq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:02.085: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:02.085: INFO: Wrong image for pod: daemon-set-4w8gq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:03.084: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:03.084: INFO: Wrong image for pod: daemon-set-4w8gq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:03.084: INFO: Pod daemon-set-4w8gq is not available
Feb  9 13:46:04.082: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:04.082: INFO: Wrong image for pod: daemon-set-4w8gq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:04.082: INFO: Pod daemon-set-4w8gq is not available
Feb  9 13:46:05.087: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:05.087: INFO: Wrong image for pod: daemon-set-4w8gq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:05.087: INFO: Pod daemon-set-4w8gq is not available
Feb  9 13:46:06.085: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:06.085: INFO: Wrong image for pod: daemon-set-4w8gq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:06.085: INFO: Pod daemon-set-4w8gq is not available
Feb  9 13:46:07.088: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:07.088: INFO: Wrong image for pod: daemon-set-4w8gq. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:07.088: INFO: Pod daemon-set-4w8gq is not available
Feb  9 13:46:08.082: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:08.082: INFO: Pod daemon-set-bwlzd is not available
Feb  9 13:46:09.087: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:09.087: INFO: Pod daemon-set-bwlzd is not available
Feb  9 13:46:10.083: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:10.083: INFO: Pod daemon-set-bwlzd is not available
Feb  9 13:46:11.103: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:11.104: INFO: Pod daemon-set-bwlzd is not available
Feb  9 13:46:12.106: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:12.106: INFO: Pod daemon-set-bwlzd is not available
Feb  9 13:46:13.083: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:13.084: INFO: Pod daemon-set-bwlzd is not available
Feb  9 13:46:14.297: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:14.297: INFO: Pod daemon-set-bwlzd is not available
Feb  9 13:46:15.086: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:15.086: INFO: Pod daemon-set-bwlzd is not available
Feb  9 13:46:16.085: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:17.093: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:18.089: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:19.084: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:20.084: INFO: Wrong image for pod: daemon-set-2wd2m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Feb  9 13:46:20.084: INFO: Pod daemon-set-2wd2m is not available
Feb  9 13:46:21.118: INFO: Pod daemon-set-jktmc is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb  9 13:46:21.244: INFO: Number of nodes with available pods: 1
Feb  9 13:46:21.244: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:46:22.255: INFO: Number of nodes with available pods: 1
Feb  9 13:46:22.256: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:46:23.256: INFO: Number of nodes with available pods: 1
Feb  9 13:46:23.256: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:46:24.269: INFO: Number of nodes with available pods: 1
Feb  9 13:46:24.269: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:46:25.260: INFO: Number of nodes with available pods: 1
Feb  9 13:46:25.260: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:46:26.271: INFO: Number of nodes with available pods: 1
Feb  9 13:46:26.271: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:46:27.263: INFO: Number of nodes with available pods: 1
Feb  9 13:46:27.263: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:46:28.286: INFO: Number of nodes with available pods: 1
Feb  9 13:46:28.286: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:46:29.283: INFO: Number of nodes with available pods: 1
Feb  9 13:46:29.283: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:46:30.289: INFO: Number of nodes with available pods: 1
Feb  9 13:46:30.289: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:46:31.273: INFO: Number of nodes with available pods: 1
Feb  9 13:46:31.274: INFO: Node iruya-node is running more than one daemon pod
Feb  9 13:46:32.256: INFO: Number of nodes with available pods: 2
Feb  9 13:46:32.256: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7094, will wait for the garbage collector to delete the pods
Feb  9 13:46:32.358: INFO: Deleting DaemonSet.extensions daemon-set took: 31.069449ms
Feb  9 13:46:32.659: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.223183ms
Feb  9 13:46:47.966: INFO: Number of nodes with available pods: 0
Feb  9 13:46:47.966: INFO: Number of running nodes: 0, number of available pods: 0
Feb  9 13:46:47.970: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7094/daemonsets","resourceVersion":"23699763"},"items":null}

Feb  9 13:46:47.977: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7094/pods","resourceVersion":"23699763"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:46:48.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7094" for this suite.
Feb  9 13:46:56.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:46:56.159: INFO: namespace daemonsets-7094 deletion completed in 8.145917563s

• [SLOW TEST:69.454 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
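The "Wrong image for pod" polling above is the test comparing each daemon pod's image against the updated spec until the RollingUpdate has replaced every pod (old pods go unavailable one at a time, then a successor like daemon-set-bwlzd appears). A sketch of that per-pod check, fed with a pod list taken from the log:

```shell
# Mirrors the test's image-drift check; pod names/images are copied from the log above.
expected="gcr.io/kubernetes-e2e-test-images/redis:1.0"
printf '%s\n' \
  "daemon-set-2wd2m docker.io/library/nginx:1.14-alpine" \
  "daemon-set-jktmc gcr.io/kubernetes-e2e-test-images/redis:1.0" |
while read -r pod image; do
  if [ "$image" != "$expected" ]; then
    echo "Wrong image for pod: $pod. Expected: $expected, got: $image."
  fi
done
```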
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:46:56.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Feb  9 13:46:56.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5382'
Feb  9 13:46:58.634: INFO: stderr: ""
Feb  9 13:46:58.635: INFO: stdout: "pod/pause created\n"
Feb  9 13:46:58.635: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb  9 13:46:58.635: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5382" to be "running and ready"
Feb  9 13:46:58.730: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 95.021879ms
Feb  9 13:47:00.738: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103210546s
Feb  9 13:47:02.747: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11184584s
Feb  9 13:47:04.757: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122188064s
Feb  9 13:47:06.763: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.128197992s
Feb  9 13:47:06.763: INFO: Pod "pause" satisfied condition "running and ready"
Feb  9 13:47:06.763: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Feb  9 13:47:06.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5382'
Feb  9 13:47:06.893: INFO: stderr: ""
Feb  9 13:47:06.894: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb  9 13:47:06.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5382'
Feb  9 13:47:07.006: INFO: stderr: ""
Feb  9 13:47:07.006: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb  9 13:47:07.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5382'
Feb  9 13:47:07.125: INFO: stderr: ""
Feb  9 13:47:07.125: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb  9 13:47:07.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5382'
Feb  9 13:47:07.202: INFO: stderr: ""
Feb  9 13:47:07.203: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          9s    \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Feb  9 13:47:07.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5382'
Feb  9 13:47:07.427: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  9 13:47:07.427: INFO: stdout: "pod \"pause\" force deleted\n"
Feb  9 13:47:07.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5382'
Feb  9 13:47:07.540: INFO: stderr: "No resources found.\n"
Feb  9 13:47:07.540: INFO: stdout: ""
Feb  9 13:47:07.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5382 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  9 13:47:07.631: INFO: stderr: ""
Feb  9 13:47:07.631: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:47:07.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5382" for this suite.
Feb  9 13:47:13.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:47:13.795: INFO: namespace kubectl-5382 deletion completed in 6.155577916s

• [SLOW TEST:17.636 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
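The label test above adds `testing-label=testing-label-value`, verifies it with `kubectl get pod -L testing-label`, then removes it using the trailing-dash form `testing-label-`. A minimal sketch of that add/remove argument convention, using a plain dict in place of real pod metadata (`apply_label_args` is a hypothetical helper, not part of kubectl):

```python
def apply_label_args(labels, args):
    """Apply kubectl-style label arguments to a label map.

    "key=value" sets the label; a bare "key-" (trailing dash, no "=")
    removes it, mirroring `kubectl label pods pause testing-label-`.
    """
    labels = dict(labels)
    for arg in args:
        if arg.endswith("-") and "=" not in arg:
            labels.pop(arg[:-1], None)  # removal form: strip the dash, drop the key
        else:
            key, _, value = arg.partition("=")
            labels[key] = value
    return labels

# Replays the two kubectl invocations from the test run above.
pod_labels = {"name": "pause"}
pod_labels = apply_label_args(pod_labels, ["testing-label=testing-label-value"])
assert pod_labels["testing-label"] == "testing-label-value"
pod_labels = apply_label_args(pod_labels, ["testing-label-"])
assert "testing-label" not in pod_labels
```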
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:47:13.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-f8de22c4-c86c-4858-83fc-f540ab396ca5 in namespace container-probe-5925
Feb  9 13:47:24.037: INFO: Started pod liveness-f8de22c4-c86c-4858-83fc-f540ab396ca5 in namespace container-probe-5925
STEP: checking the pod's current state and verifying that restartCount is present
Feb  9 13:47:24.047: INFO: Initial restart count of pod liveness-f8de22c4-c86c-4858-83fc-f540ab396ca5 is 0
Feb  9 13:47:40.125: INFO: Restart count of pod container-probe-5925/liveness-f8de22c4-c86c-4858-83fc-f540ab396ca5 is now 1 (16.078455361s elapsed)
Feb  9 13:48:00.225: INFO: Restart count of pod container-probe-5925/liveness-f8de22c4-c86c-4858-83fc-f540ab396ca5 is now 2 (36.178143222s elapsed)
Feb  9 13:48:21.214: INFO: Restart count of pod container-probe-5925/liveness-f8de22c4-c86c-4858-83fc-f540ab396ca5 is now 3 (57.167493288s elapsed)
Feb  9 13:48:39.756: INFO: Restart count of pod container-probe-5925/liveness-f8de22c4-c86c-4858-83fc-f540ab396ca5 is now 4 (1m15.709235438s elapsed)
Feb  9 13:49:46.263: INFO: Restart count of pod container-probe-5925/liveness-f8de22c4-c86c-4858-83fc-f540ab396ca5 is now 5 (2m22.216297536s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:49:46.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5925" for this suite.
Feb  9 13:49:52.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:49:52.645: INFO: namespace container-probe-5925 deletion completed in 6.23091139s

• [SLOW TEST:158.850 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
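The probing test above asserts that the liveness-restarted pod's `restartCount` only ever grows (0, 1, 2, 3, 4, 5 in this run). The property being checked can be sketched as a simple monotonicity predicate over the observed counts (illustrative only; the real check lives in the Go e2e framework):

```python
def is_monotonically_increasing(counts):
    """True if each observed restartCount is >= the one before it."""
    return all(later >= earlier for earlier, later in zip(counts, counts[1:]))

# The restart counts logged for liveness-f8de22c4-... above.
observed = [0, 1, 2, 3, 4, 5]
assert is_monotonically_increasing(observed)

# A count that ever decreased would fail the conformance check.
assert not is_monotonically_increasing([0, 2, 1])
```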
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:49:52.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  9 13:49:52.771: INFO: Waiting up to 5m0s for pod "pod-989b3770-cc7a-4506-bf92-6c08959c92c0" in namespace "emptydir-7764" to be "success or failure"
Feb  9 13:49:52.779: INFO: Pod "pod-989b3770-cc7a-4506-bf92-6c08959c92c0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.510733ms
Feb  9 13:49:54.788: INFO: Pod "pod-989b3770-cc7a-4506-bf92-6c08959c92c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017065298s
Feb  9 13:49:56.818: INFO: Pod "pod-989b3770-cc7a-4506-bf92-6c08959c92c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047004478s
Feb  9 13:49:58.831: INFO: Pod "pod-989b3770-cc7a-4506-bf92-6c08959c92c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05934146s
Feb  9 13:50:00.875: INFO: Pod "pod-989b3770-cc7a-4506-bf92-6c08959c92c0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.103166059s
Feb  9 13:50:02.880: INFO: Pod "pod-989b3770-cc7a-4506-bf92-6c08959c92c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108644938s
STEP: Saw pod success
Feb  9 13:50:02.880: INFO: Pod "pod-989b3770-cc7a-4506-bf92-6c08959c92c0" satisfied condition "success or failure"
Feb  9 13:50:02.883: INFO: Trying to get logs from node iruya-node pod pod-989b3770-cc7a-4506-bf92-6c08959c92c0 container test-container: 
STEP: delete the pod
Feb  9 13:50:03.013: INFO: Waiting for pod pod-989b3770-cc7a-4506-bf92-6c08959c92c0 to disappear
Feb  9 13:50:03.022: INFO: Pod pod-989b3770-cc7a-4506-bf92-6c08959c92c0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:50:03.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7764" for this suite.
Feb  9 13:50:09.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:50:09.206: INFO: namespace emptydir-7764 deletion completed in 6.179406093s

• [SLOW TEST:16.560 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
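The emptydir test above writes into a tmpfs-backed volume mounted with mode 0777 and succeeds only if the permission bits come back as expected. A rough stand-in sketch, using an ordinary temp directory in place of the tmpfs emptyDir volume (so this only illustrates the mode-bit assertion, not the volume medium):

```python
import os
import stat
import tempfile

# A plain temp dir stands in for the tmpfs-backed emptyDir mount.
volume_dir = tempfile.mkdtemp()
os.chmod(volume_dir, 0o777)  # chmod is not subject to the process umask

# The conformance test's essential assertion: the mount carries 0777.
mode = stat.S_IMODE(os.stat(volume_dir).st_mode)
assert mode == 0o777
```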
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:50:09.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb  9 13:50:09.325: INFO: Waiting up to 5m0s for pod "pod-bf191a92-8e0a-422e-82e8-11c1dd2eb0fb" in namespace "emptydir-8931" to be "success or failure"
Feb  9 13:50:09.347: INFO: Pod "pod-bf191a92-8e0a-422e-82e8-11c1dd2eb0fb": Phase="Pending", Reason="", readiness=false. Elapsed: 20.453856ms
Feb  9 13:50:11.359: INFO: Pod "pod-bf191a92-8e0a-422e-82e8-11c1dd2eb0fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032956388s
Feb  9 13:50:13.373: INFO: Pod "pod-bf191a92-8e0a-422e-82e8-11c1dd2eb0fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04680332s
Feb  9 13:50:15.386: INFO: Pod "pod-bf191a92-8e0a-422e-82e8-11c1dd2eb0fb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060115989s
Feb  9 13:50:17.395: INFO: Pod "pod-bf191a92-8e0a-422e-82e8-11c1dd2eb0fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069353013s
Feb  9 13:50:19.407: INFO: Pod "pod-bf191a92-8e0a-422e-82e8-11c1dd2eb0fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08117835s
STEP: Saw pod success
Feb  9 13:50:19.407: INFO: Pod "pod-bf191a92-8e0a-422e-82e8-11c1dd2eb0fb" satisfied condition "success or failure"
Feb  9 13:50:19.413: INFO: Trying to get logs from node iruya-node pod pod-bf191a92-8e0a-422e-82e8-11c1dd2eb0fb container test-container: 
STEP: delete the pod
Feb  9 13:50:19.527: INFO: Waiting for pod pod-bf191a92-8e0a-422e-82e8-11c1dd2eb0fb to disappear
Feb  9 13:50:19.562: INFO: Pod pod-bf191a92-8e0a-422e-82e8-11c1dd2eb0fb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:50:19.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8931" for this suite.
Feb  9 13:50:25.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:50:25.751: INFO: namespace emptydir-8931 deletion completed in 6.180211123s

• [SLOW TEST:16.545 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:50:25.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6456
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  9 13:50:25.870: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  9 13:51:04.085: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-6456 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 13:51:04.085: INFO: >>> kubeConfig: /root/.kube/config
I0209 13:51:04.171554       8 log.go:172] (0xc000c17290) (0xc000d6cc80) Create stream
I0209 13:51:04.171685       8 log.go:172] (0xc000c17290) (0xc000d6cc80) Stream added, broadcasting: 1
I0209 13:51:04.179812       8 log.go:172] (0xc000c17290) Reply frame received for 1
I0209 13:51:04.179847       8 log.go:172] (0xc000c17290) (0xc0012d6c80) Create stream
I0209 13:51:04.179856       8 log.go:172] (0xc000c17290) (0xc0012d6c80) Stream added, broadcasting: 3
I0209 13:51:04.181602       8 log.go:172] (0xc000c17290) Reply frame received for 3
I0209 13:51:04.181628       8 log.go:172] (0xc000c17290) (0xc001a96000) Create stream
I0209 13:51:04.181640       8 log.go:172] (0xc000c17290) (0xc001a96000) Stream added, broadcasting: 5
I0209 13:51:04.182812       8 log.go:172] (0xc000c17290) Reply frame received for 5
I0209 13:51:04.337544       8 log.go:172] (0xc000c17290) Data frame received for 3
I0209 13:51:04.337610       8 log.go:172] (0xc0012d6c80) (3) Data frame handling
I0209 13:51:04.337624       8 log.go:172] (0xc0012d6c80) (3) Data frame sent
I0209 13:51:04.544491       8 log.go:172] (0xc000c17290) Data frame received for 1
I0209 13:51:04.544639       8 log.go:172] (0xc000c17290) (0xc0012d6c80) Stream removed, broadcasting: 3
I0209 13:51:04.544740       8 log.go:172] (0xc000d6cc80) (1) Data frame handling
I0209 13:51:04.544784       8 log.go:172] (0xc000d6cc80) (1) Data frame sent
I0209 13:51:04.544790       8 log.go:172] (0xc000c17290) (0xc000d6cc80) Stream removed, broadcasting: 1
I0209 13:51:04.545116       8 log.go:172] (0xc000c17290) (0xc001a96000) Stream removed, broadcasting: 5
I0209 13:51:04.545199       8 log.go:172] (0xc000c17290) (0xc000d6cc80) Stream removed, broadcasting: 1
I0209 13:51:04.545222       8 log.go:172] (0xc000c17290) (0xc0012d6c80) Stream removed, broadcasting: 3
I0209 13:51:04.545238       8 log.go:172] (0xc000c17290) (0xc001a96000) Stream removed, broadcasting: 5
Feb  9 13:51:04.545: INFO: Waiting for endpoints: map[]
I0209 13:51:04.546616       8 log.go:172] (0xc000c17290) Go away received
Feb  9 13:51:04.880: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-6456 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 13:51:04.880: INFO: >>> kubeConfig: /root/.kube/config
I0209 13:51:04.952072       8 log.go:172] (0xc000c17ef0) (0xc000d6d540) Create stream
I0209 13:51:04.952166       8 log.go:172] (0xc000c17ef0) (0xc000d6d540) Stream added, broadcasting: 1
I0209 13:51:04.961075       8 log.go:172] (0xc000c17ef0) Reply frame received for 1
I0209 13:51:04.961163       8 log.go:172] (0xc000c17ef0) (0xc0012d6fa0) Create stream
I0209 13:51:04.961178       8 log.go:172] (0xc000c17ef0) (0xc0012d6fa0) Stream added, broadcasting: 3
I0209 13:51:04.965560       8 log.go:172] (0xc000c17ef0) Reply frame received for 3
I0209 13:51:04.965630       8 log.go:172] (0xc000c17ef0) (0xc001150000) Create stream
I0209 13:51:04.965651       8 log.go:172] (0xc000c17ef0) (0xc001150000) Stream added, broadcasting: 5
I0209 13:51:04.968237       8 log.go:172] (0xc000c17ef0) Reply frame received for 5
I0209 13:51:05.146727       8 log.go:172] (0xc000c17ef0) Data frame received for 3
I0209 13:51:05.146784       8 log.go:172] (0xc0012d6fa0) (3) Data frame handling
I0209 13:51:05.146810       8 log.go:172] (0xc0012d6fa0) (3) Data frame sent
I0209 13:51:05.268651       8 log.go:172] (0xc000c17ef0) (0xc0012d6fa0) Stream removed, broadcasting: 3
I0209 13:51:05.269094       8 log.go:172] (0xc000c17ef0) Data frame received for 1
I0209 13:51:05.269225       8 log.go:172] (0xc000c17ef0) (0xc001150000) Stream removed, broadcasting: 5
I0209 13:51:05.269446       8 log.go:172] (0xc000d6d540) (1) Data frame handling
I0209 13:51:05.269501       8 log.go:172] (0xc000d6d540) (1) Data frame sent
I0209 13:51:05.269530       8 log.go:172] (0xc000c17ef0) (0xc000d6d540) Stream removed, broadcasting: 1
I0209 13:51:05.269571       8 log.go:172] (0xc000c17ef0) Go away received
I0209 13:51:05.270522       8 log.go:172] (0xc000c17ef0) (0xc000d6d540) Stream removed, broadcasting: 1
I0209 13:51:05.270589       8 log.go:172] (0xc000c17ef0) (0xc0012d6fa0) Stream removed, broadcasting: 3
I0209 13:51:05.270598       8 log.go:172] (0xc000c17ef0) (0xc001150000) Stream removed, broadcasting: 5
Feb  9 13:51:05.270: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:51:05.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6456" for this suite.
Feb  9 13:51:29.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:51:29.458: INFO: namespace pod-network-test-6456 deletion completed in 24.172323093s

• [SLOW TEST:63.707 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
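The intra-pod UDP check above curls a `/dial` endpoint on the host test container, which then relays to each target pod. The probe URL seen in the log can be reconstructed as follows (`dial_url` is a hypothetical helper; the query parameters are taken verbatim from the logged command):

```python
from urllib.parse import urlencode

def dial_url(host_ip, target_ip, protocol="udp", port=8081, tries=1):
    """Build the /dial probe URL the host test container is asked to curl."""
    query = urlencode({
        "request": "hostName",   # ask each target to report its hostname
        "protocol": protocol,    # udp in this test variant
        "host": target_ip,       # pod IP under test
        "port": port,
        "tries": tries,
    })
    return f"http://{host_ip}:8080/dial?{query}"

# Matches the first ExecWithOptions curl target in the log above.
url = dial_url("10.44.0.2", "10.32.0.4")
assert url == ("http://10.44.0.2:8080/dial?"
               "request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1")
```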
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:51:29.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 13:51:29.565: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d9419abe-1e9f-4f58-b773-14165562d55d" in namespace "projected-4746" to be "success or failure"
Feb  9 13:51:29.601: INFO: Pod "downwardapi-volume-d9419abe-1e9f-4f58-b773-14165562d55d": Phase="Pending", Reason="", readiness=false. Elapsed: 35.959547ms
Feb  9 13:51:31.614: INFO: Pod "downwardapi-volume-d9419abe-1e9f-4f58-b773-14165562d55d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048681425s
Feb  9 13:51:33.633: INFO: Pod "downwardapi-volume-d9419abe-1e9f-4f58-b773-14165562d55d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067732827s
Feb  9 13:51:35.643: INFO: Pod "downwardapi-volume-d9419abe-1e9f-4f58-b773-14165562d55d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078011463s
Feb  9 13:51:37.657: INFO: Pod "downwardapi-volume-d9419abe-1e9f-4f58-b773-14165562d55d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.091918766s
Feb  9 13:51:39.673: INFO: Pod "downwardapi-volume-d9419abe-1e9f-4f58-b773-14165562d55d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108196874s
STEP: Saw pod success
Feb  9 13:51:39.674: INFO: Pod "downwardapi-volume-d9419abe-1e9f-4f58-b773-14165562d55d" satisfied condition "success or failure"
Feb  9 13:51:39.678: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d9419abe-1e9f-4f58-b773-14165562d55d container client-container: 
STEP: delete the pod
Feb  9 13:51:40.045: INFO: Waiting for pod downwardapi-volume-d9419abe-1e9f-4f58-b773-14165562d55d to disappear
Feb  9 13:51:40.058: INFO: Pod downwardapi-volume-d9419abe-1e9f-4f58-b773-14165562d55d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:51:40.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4746" for this suite.
Feb  9 13:51:46.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:51:46.228: INFO: namespace projected-4746 deletion completed in 6.160549449s

• [SLOW TEST:16.770 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:51:46.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 13:51:46.304: INFO: Waiting up to 5m0s for pod "downwardapi-volume-86d74323-76bd-4b77-b351-63d8a3c12d5b" in namespace "downward-api-8488" to be "success or failure"
Feb  9 13:51:46.326: INFO: Pod "downwardapi-volume-86d74323-76bd-4b77-b351-63d8a3c12d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.147891ms
Feb  9 13:51:48.339: INFO: Pod "downwardapi-volume-86d74323-76bd-4b77-b351-63d8a3c12d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034555896s
Feb  9 13:51:50.352: INFO: Pod "downwardapi-volume-86d74323-76bd-4b77-b351-63d8a3c12d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047422423s
Feb  9 13:51:52.364: INFO: Pod "downwardapi-volume-86d74323-76bd-4b77-b351-63d8a3c12d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059381556s
Feb  9 13:51:54.372: INFO: Pod "downwardapi-volume-86d74323-76bd-4b77-b351-63d8a3c12d5b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067985282s
Feb  9 13:51:56.380: INFO: Pod "downwardapi-volume-86d74323-76bd-4b77-b351-63d8a3c12d5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075337624s
STEP: Saw pod success
Feb  9 13:51:56.380: INFO: Pod "downwardapi-volume-86d74323-76bd-4b77-b351-63d8a3c12d5b" satisfied condition "success or failure"
Feb  9 13:51:56.384: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-86d74323-76bd-4b77-b351-63d8a3c12d5b container client-container: 
STEP: delete the pod
Feb  9 13:51:56.507: INFO: Waiting for pod downwardapi-volume-86d74323-76bd-4b77-b351-63d8a3c12d5b to disappear
Feb  9 13:51:56.517: INFO: Pod downwardapi-volume-86d74323-76bd-4b77-b351-63d8a3c12d5b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:51:56.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8488" for this suite.
Feb  9 13:52:02.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:52:02.654: INFO: namespace downward-api-8488 deletion completed in 6.126318895s

• [SLOW TEST:16.426 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:52:02.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8e5b1ec5-748c-42da-ada1-a6fed588b6fc
STEP: Creating a pod to test consume secrets
Feb  9 13:52:02.767: INFO: Waiting up to 5m0s for pod "pod-secrets-5ead8648-f841-4c3f-9994-b8f00a0418b9" in namespace "secrets-8826" to be "success or failure"
Feb  9 13:52:02.777: INFO: Pod "pod-secrets-5ead8648-f841-4c3f-9994-b8f00a0418b9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.532435ms
Feb  9 13:52:04.791: INFO: Pod "pod-secrets-5ead8648-f841-4c3f-9994-b8f00a0418b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023936972s
Feb  9 13:52:06.797: INFO: Pod "pod-secrets-5ead8648-f841-4c3f-9994-b8f00a0418b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030502954s
Feb  9 13:52:08.809: INFO: Pod "pod-secrets-5ead8648-f841-4c3f-9994-b8f00a0418b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042039361s
Feb  9 13:52:10.822: INFO: Pod "pod-secrets-5ead8648-f841-4c3f-9994-b8f00a0418b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.054556208s
STEP: Saw pod success
Feb  9 13:52:10.822: INFO: Pod "pod-secrets-5ead8648-f841-4c3f-9994-b8f00a0418b9" satisfied condition "success or failure"
Feb  9 13:52:10.827: INFO: Trying to get logs from node iruya-node pod pod-secrets-5ead8648-f841-4c3f-9994-b8f00a0418b9 container secret-volume-test: 
STEP: delete the pod
Feb  9 13:52:10.893: INFO: Waiting for pod pod-secrets-5ead8648-f841-4c3f-9994-b8f00a0418b9 to disappear
Feb  9 13:52:10.902: INFO: Pod pod-secrets-5ead8648-f841-4c3f-9994-b8f00a0418b9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:52:10.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8826" for this suite.
Feb  9 13:52:16.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:52:17.048: INFO: namespace secrets-8826 deletion completed in 6.142154764s

• [SLOW TEST:14.394 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:52:17.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  9 13:52:17.148: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:52:32.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5503" for this suite.
Feb  9 13:52:38.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:52:38.424: INFO: namespace init-container-5503 deletion completed in 6.195888081s

• [SLOW TEST:21.375 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:52:38.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-d62a00ee-7622-4cb1-b43d-bbc2fbc17ed9 in namespace container-probe-1031
Feb  9 13:52:48.601: INFO: Started pod liveness-d62a00ee-7622-4cb1-b43d-bbc2fbc17ed9 in namespace container-probe-1031
STEP: checking the pod's current state and verifying that restartCount is present
Feb  9 13:52:48.605: INFO: Initial restart count of pod liveness-d62a00ee-7622-4cb1-b43d-bbc2fbc17ed9 is 0
Feb  9 13:53:12.761: INFO: Restart count of pod container-probe-1031/liveness-d62a00ee-7622-4cb1-b43d-bbc2fbc17ed9 is now 1 (24.155741638s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:53:12.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1031" for this suite.
Feb  9 13:53:18.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:53:19.009: INFO: namespace container-probe-1031 deletion completed in 6.215309806s

• [SLOW TEST:40.584 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:53:19.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb  9 13:53:19.860: INFO: Pod name wrapped-volume-race-bd7c61f1-6dcb-4032-b8d7-7e2e08e66bb8: Found 0 pods out of 5
Feb  9 13:53:24.877: INFO: Pod name wrapped-volume-race-bd7c61f1-6dcb-4032-b8d7-7e2e08e66bb8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bd7c61f1-6dcb-4032-b8d7-7e2e08e66bb8 in namespace emptydir-wrapper-9024, will wait for the garbage collector to delete the pods
Feb  9 13:53:55.063: INFO: Deleting ReplicationController wrapped-volume-race-bd7c61f1-6dcb-4032-b8d7-7e2e08e66bb8 took: 32.196874ms
Feb  9 13:53:55.564: INFO: Terminating ReplicationController wrapped-volume-race-bd7c61f1-6dcb-4032-b8d7-7e2e08e66bb8 pods took: 500.482128ms
STEP: Creating RC which spawns configmap-volume pods
Feb  9 13:54:38.755: INFO: Pod name wrapped-volume-race-351a8e44-75e1-4e00-9649-6f54e3244daa: Found 0 pods out of 5
Feb  9 13:54:43.790: INFO: Pod name wrapped-volume-race-351a8e44-75e1-4e00-9649-6f54e3244daa: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-351a8e44-75e1-4e00-9649-6f54e3244daa in namespace emptydir-wrapper-9024, will wait for the garbage collector to delete the pods
Feb  9 13:55:19.884: INFO: Deleting ReplicationController wrapped-volume-race-351a8e44-75e1-4e00-9649-6f54e3244daa took: 11.559542ms
Feb  9 13:55:20.285: INFO: Terminating ReplicationController wrapped-volume-race-351a8e44-75e1-4e00-9649-6f54e3244daa pods took: 400.582614ms
STEP: Creating RC which spawns configmap-volume pods
Feb  9 13:56:03.045: INFO: Pod name wrapped-volume-race-27c2ee29-2008-49c8-be11-d16f1d89f8da: Found 0 pods out of 5
Feb  9 13:56:08.061: INFO: Pod name wrapped-volume-race-27c2ee29-2008-49c8-be11-d16f1d89f8da: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-27c2ee29-2008-49c8-be11-d16f1d89f8da in namespace emptydir-wrapper-9024, will wait for the garbage collector to delete the pods
Feb  9 13:56:42.227: INFO: Deleting ReplicationController wrapped-volume-race-27c2ee29-2008-49c8-be11-d16f1d89f8da took: 16.640403ms
Feb  9 13:56:42.527: INFO: Terminating ReplicationController wrapped-volume-race-27c2ee29-2008-49c8-be11-d16f1d89f8da pods took: 300.781304ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:57:37.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9024" for this suite.
Feb  9 13:57:45.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:57:45.551: INFO: namespace emptydir-wrapper-9024 deletion completed in 8.236259887s

• [SLOW TEST:266.542 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:57:45.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb  9 13:57:45.669: INFO: Waiting up to 5m0s for pod "pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e" in namespace "emptydir-7370" to be "success or failure"
Feb  9 13:57:45.688: INFO: Pod "pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.539288ms
Feb  9 13:57:47.700: INFO: Pod "pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030983857s
Feb  9 13:57:49.710: INFO: Pod "pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040923381s
Feb  9 13:57:51.720: INFO: Pod "pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050721095s
Feb  9 13:57:53.734: INFO: Pod "pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064297503s
Feb  9 13:57:55.751: INFO: Pod "pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.081718925s
Feb  9 13:57:57.760: INFO: Pod "pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.090286361s
Feb  9 13:57:59.767: INFO: Pod "pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.097179546s
STEP: Saw pod success
Feb  9 13:57:59.767: INFO: Pod "pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e" satisfied condition "success or failure"
Feb  9 13:57:59.770: INFO: Trying to get logs from node iruya-node pod pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e container test-container: 
STEP: delete the pod
Feb  9 13:57:59.966: INFO: Waiting for pod pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e to disappear
Feb  9 13:57:59.975: INFO: Pod pod-a6e9c386-f19a-44b8-a4df-d0c8afe7974e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:57:59.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7370" for this suite.
Feb  9 13:58:06.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:58:06.148: INFO: namespace emptydir-7370 deletion completed in 6.151128098s

• [SLOW TEST:20.596 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:58:06.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-z5tc
STEP: Creating a pod to test atomic-volume-subpath
Feb  9 13:58:06.280: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-z5tc" in namespace "subpath-2797" to be "success or failure"
Feb  9 13:58:06.299: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.466952ms
Feb  9 13:58:08.311: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030593489s
Feb  9 13:58:10.321: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040922483s
Feb  9 13:58:12.330: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049918836s
Feb  9 13:58:14.345: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064551452s
Feb  9 13:58:16.357: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Running", Reason="", readiness=true. Elapsed: 10.076201916s
Feb  9 13:58:18.374: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Running", Reason="", readiness=true. Elapsed: 12.093925275s
Feb  9 13:58:20.383: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Running", Reason="", readiness=true. Elapsed: 14.102157035s
Feb  9 13:58:22.396: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Running", Reason="", readiness=true. Elapsed: 16.115783731s
Feb  9 13:58:24.406: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Running", Reason="", readiness=true. Elapsed: 18.125753774s
Feb  9 13:58:26.417: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Running", Reason="", readiness=true. Elapsed: 20.136165606s
Feb  9 13:58:28.424: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Running", Reason="", readiness=true. Elapsed: 22.143427656s
Feb  9 13:58:30.434: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Running", Reason="", readiness=true. Elapsed: 24.153653149s
Feb  9 13:58:32.444: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Running", Reason="", readiness=true. Elapsed: 26.164029218s
Feb  9 13:58:34.479: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Running", Reason="", readiness=true. Elapsed: 28.198629918s
Feb  9 13:58:36.496: INFO: Pod "pod-subpath-test-downwardapi-z5tc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.21567629s
STEP: Saw pod success
Feb  9 13:58:36.496: INFO: Pod "pod-subpath-test-downwardapi-z5tc" satisfied condition "success or failure"
Feb  9 13:58:36.504: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-z5tc container test-container-subpath-downwardapi-z5tc: 
STEP: delete the pod
Feb  9 13:58:36.605: INFO: Waiting for pod pod-subpath-test-downwardapi-z5tc to disappear
Feb  9 13:58:36.619: INFO: Pod pod-subpath-test-downwardapi-z5tc no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-z5tc
Feb  9 13:58:36.619: INFO: Deleting pod "pod-subpath-test-downwardapi-z5tc" in namespace "subpath-2797"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:58:36.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2797" for this suite.
Feb  9 13:58:42.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:58:42.780: INFO: namespace subpath-2797 deletion completed in 6.131116509s

• [SLOW TEST:36.631 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:58:42.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb  9 13:58:51.988: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:58:53.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7648" for this suite.
Feb  9 13:59:33.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 13:59:33.177: INFO: namespace replicaset-7648 deletion completed in 40.1400259s

• [SLOW TEST:50.397 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 13:59:33.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  9 13:59:33.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3087'
Feb  9 13:59:35.692: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  9 13:59:35.693: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Feb  9 13:59:37.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3087'
Feb  9 13:59:38.016: INFO: stderr: ""
Feb  9 13:59:38.016: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 13:59:38.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3087" for this suite.
Feb  9 14:00:00.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:00:00.252: INFO: namespace kubectl-3087 deletion completed in 22.228618971s

• [SLOW TEST:27.074 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:00:00.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Feb  9 14:00:10.967: INFO: Successfully updated pod "annotationupdatecc481f62-44a1-4aa8-b819-d31212300cc5"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:00:13.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2976" for this suite.
Feb  9 14:00:35.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:00:35.393: INFO: namespace downward-api-2976 deletion completed in 22.177646452s

• [SLOW TEST:35.140 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:00:35.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 14:00:35.496: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ac088d0-2687-490c-8eee-079c758e7d49" in namespace "downward-api-8086" to be "success or failure"
Feb  9 14:00:35.503: INFO: Pod "downwardapi-volume-8ac088d0-2687-490c-8eee-079c758e7d49": Phase="Pending", Reason="", readiness=false. Elapsed: 7.094292ms
Feb  9 14:00:37.513: INFO: Pod "downwardapi-volume-8ac088d0-2687-490c-8eee-079c758e7d49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016550431s
Feb  9 14:00:39.520: INFO: Pod "downwardapi-volume-8ac088d0-2687-490c-8eee-079c758e7d49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024252293s
Feb  9 14:00:41.528: INFO: Pod "downwardapi-volume-8ac088d0-2687-490c-8eee-079c758e7d49": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031856564s
Feb  9 14:00:43.537: INFO: Pod "downwardapi-volume-8ac088d0-2687-490c-8eee-079c758e7d49": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040856897s
Feb  9 14:00:45.546: INFO: Pod "downwardapi-volume-8ac088d0-2687-490c-8eee-079c758e7d49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.049734327s
STEP: Saw pod success
Feb  9 14:00:45.546: INFO: Pod "downwardapi-volume-8ac088d0-2687-490c-8eee-079c758e7d49" satisfied condition "success or failure"
Feb  9 14:00:45.555: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8ac088d0-2687-490c-8eee-079c758e7d49 container client-container: 
STEP: delete the pod
Feb  9 14:00:45.713: INFO: Waiting for pod downwardapi-volume-8ac088d0-2687-490c-8eee-079c758e7d49 to disappear
Feb  9 14:00:45.724: INFO: Pod downwardapi-volume-8ac088d0-2687-490c-8eee-079c758e7d49 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:00:45.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8086" for this suite.
Feb  9 14:00:51.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:00:51.963: INFO: namespace downward-api-8086 deletion completed in 6.22970335s

• [SLOW TEST:16.569 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:00:51.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:01:52.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8868" for this suite.
Feb  9 14:02:14.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:02:14.313: INFO: namespace container-probe-8868 deletion completed in 22.164345847s

• [SLOW TEST:82.350 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:02:14.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-l2pp
STEP: Creating a pod to test atomic-volume-subpath
Feb  9 14:02:14.453: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-l2pp" in namespace "subpath-3435" to be "success or failure"
Feb  9 14:02:14.462: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Pending", Reason="", readiness=false. Elapsed: 8.760162ms
Feb  9 14:02:16.476: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023029682s
Feb  9 14:02:18.488: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034905167s
Feb  9 14:02:20.505: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05162055s
Feb  9 14:02:22.519: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Running", Reason="", readiness=true. Elapsed: 8.065904471s
Feb  9 14:02:24.537: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Running", Reason="", readiness=true. Elapsed: 10.084184516s
Feb  9 14:02:26.559: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Running", Reason="", readiness=true. Elapsed: 12.105745488s
Feb  9 14:02:28.581: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Running", Reason="", readiness=true. Elapsed: 14.128215551s
Feb  9 14:02:30.600: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Running", Reason="", readiness=true. Elapsed: 16.146523989s
Feb  9 14:02:32.618: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Running", Reason="", readiness=true. Elapsed: 18.165052078s
Feb  9 14:02:34.628: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Running", Reason="", readiness=true. Elapsed: 20.174474497s
Feb  9 14:02:36.636: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Running", Reason="", readiness=true. Elapsed: 22.182774821s
Feb  9 14:02:38.651: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Running", Reason="", readiness=true. Elapsed: 24.198125149s
Feb  9 14:02:40.669: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Running", Reason="", readiness=true. Elapsed: 26.215885999s
Feb  9 14:02:42.682: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Running", Reason="", readiness=true. Elapsed: 28.228957291s
Feb  9 14:02:44.701: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Running", Reason="", readiness=true. Elapsed: 30.247922314s
Feb  9 14:02:46.709: INFO: Pod "pod-subpath-test-secret-l2pp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.255775071s
STEP: Saw pod success
Feb  9 14:02:46.709: INFO: Pod "pod-subpath-test-secret-l2pp" satisfied condition "success or failure"
Feb  9 14:02:46.712: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-l2pp container test-container-subpath-secret-l2pp: 
STEP: delete the pod
Feb  9 14:02:46.949: INFO: Waiting for pod pod-subpath-test-secret-l2pp to disappear
Feb  9 14:02:46.966: INFO: Pod pod-subpath-test-secret-l2pp no longer exists
STEP: Deleting pod pod-subpath-test-secret-l2pp
Feb  9 14:02:46.966: INFO: Deleting pod "pod-subpath-test-secret-l2pp" in namespace "subpath-3435"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:02:46.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3435" for this suite.
Feb  9 14:02:53.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:02:53.156: INFO: namespace subpath-3435 deletion completed in 6.170857424s

• [SLOW TEST:38.841 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
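The Elapsed values in the wait loop above come from fixed-interval polling: the framework re-reads `pod.status.phase` roughly every two seconds until the pod reaches a terminal phase or the 5m0s timeout expires. A minimal Python sketch of that loop (the framework itself is Go; `get_phase` is a stand-in for the API read, not a real client call):

```python
import time

def wait_for_pod_success_or_failure(get_phase, timeout_s=300, poll_s=2.0):
    """Poll a pod's phase until it reaches a terminal state or times out.

    get_phase stands in for reading pod.status.phase through the API;
    the e2e framework polls on roughly the 2s cadence visible in the
    Elapsed values above, with the 5m0s timeout shown in the log.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(poll_s)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated phase sequence matching the log: Pending, then Running, then Succeeded.
phases = iter(["Pending"] * 4 + ["Running"] * 12 + ["Succeeded"])
result = wait_for_pod_success_or_failure(lambda: next(phases), poll_s=0)
```

The "success or failure" condition in the log is exactly this terminal-phase check.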
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:02:53.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  9 14:02:53.296: INFO: Waiting up to 5m0s for pod "downward-api-125a575d-dc11-4219-ae14-fb98fbf71133" in namespace "downward-api-2643" to be "success or failure"
Feb  9 14:02:53.304: INFO: Pod "downward-api-125a575d-dc11-4219-ae14-fb98fbf71133": Phase="Pending", Reason="", readiness=false. Elapsed: 7.315795ms
Feb  9 14:02:55.312: INFO: Pod "downward-api-125a575d-dc11-4219-ae14-fb98fbf71133": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015477712s
Feb  9 14:02:57.320: INFO: Pod "downward-api-125a575d-dc11-4219-ae14-fb98fbf71133": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023480403s
Feb  9 14:02:59.329: INFO: Pod "downward-api-125a575d-dc11-4219-ae14-fb98fbf71133": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033139231s
Feb  9 14:03:01.338: INFO: Pod "downward-api-125a575d-dc11-4219-ae14-fb98fbf71133": Phase="Pending", Reason="", readiness=false. Elapsed: 8.041777802s
Feb  9 14:03:03.348: INFO: Pod "downward-api-125a575d-dc11-4219-ae14-fb98fbf71133": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052301643s
STEP: Saw pod success
Feb  9 14:03:03.349: INFO: Pod "downward-api-125a575d-dc11-4219-ae14-fb98fbf71133" satisfied condition "success or failure"
Feb  9 14:03:03.355: INFO: Trying to get logs from node iruya-node pod downward-api-125a575d-dc11-4219-ae14-fb98fbf71133 container dapi-container: 
STEP: delete the pod
Feb  9 14:03:03.641: INFO: Waiting for pod downward-api-125a575d-dc11-4219-ae14-fb98fbf71133 to disappear
Feb  9 14:03:03.679: INFO: Pod downward-api-125a575d-dc11-4219-ae14-fb98fbf71133 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:03:03.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2643" for this suite.
Feb  9 14:03:09.715: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:03:09.877: INFO: namespace downward-api-2643 deletion completed in 6.191243496s

• [SLOW TEST:16.721 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
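The Downward API test above runs a container with no declared resource limits and asserts that `resourceFieldRef` environment variables fall back to the node's allocatable CPU and memory. A sketch of the env fragment such a pod spec carries (field names follow the core/v1 API; the env var names here are illustrative, not necessarily those used by the test):

```python
# Hypothetical pod-spec env fragment: with no limits set on the container,
# resourceFieldRef resolves limits.cpu / limits.memory from node allocatable,
# which is the behavior this conformance test verifies.
env = [
    {"name": "CPU_LIMIT",
     "valueFrom": {"resourceFieldRef": {"resource": "limits.cpu"}}},
    {"name": "MEMORY_LIMIT",
     "valueFrom": {"resourceFieldRef": {"resource": "limits.memory"}}},
]
```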
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:03:09.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9907/configmap-test-ca84367a-6ff0-45fc-95da-b5c8f1efa6ae
STEP: Creating a pod to test consume configMaps
Feb  9 14:03:10.005: INFO: Waiting up to 5m0s for pod "pod-configmaps-e80af23b-4c32-4414-a910-19513146166e" in namespace "configmap-9907" to be "success or failure"
Feb  9 14:03:10.016: INFO: Pod "pod-configmaps-e80af23b-4c32-4414-a910-19513146166e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.371647ms
Feb  9 14:03:12.024: INFO: Pod "pod-configmaps-e80af23b-4c32-4414-a910-19513146166e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018344509s
Feb  9 14:03:14.041: INFO: Pod "pod-configmaps-e80af23b-4c32-4414-a910-19513146166e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035188743s
Feb  9 14:03:16.054: INFO: Pod "pod-configmaps-e80af23b-4c32-4414-a910-19513146166e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049151415s
Feb  9 14:03:18.062: INFO: Pod "pod-configmaps-e80af23b-4c32-4414-a910-19513146166e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057157939s
Feb  9 14:03:20.073: INFO: Pod "pod-configmaps-e80af23b-4c32-4414-a910-19513146166e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.068100224s
STEP: Saw pod success
Feb  9 14:03:20.074: INFO: Pod "pod-configmaps-e80af23b-4c32-4414-a910-19513146166e" satisfied condition "success or failure"
Feb  9 14:03:20.079: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e80af23b-4c32-4414-a910-19513146166e container env-test: 
STEP: delete the pod
Feb  9 14:03:20.197: INFO: Waiting for pod pod-configmaps-e80af23b-4c32-4414-a910-19513146166e to disappear
Feb  9 14:03:20.332: INFO: Pod pod-configmaps-e80af23b-4c32-4414-a910-19513146166e no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:03:20.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9907" for this suite.
Feb  9 14:03:26.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:03:26.533: INFO: namespace configmap-9907 deletion completed in 6.181100386s

• [SLOW TEST:16.655 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
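Consuming a ConfigMap "via environment variable", as this test does, means the container's env entries use `configMapKeyRef` rather than a volume mount. An illustrative fragment (the ConfigMap name matches the one created above; the key and env var names are assumptions for illustration):

```python
# Hypothetical env fragment: the container reads one key of the ConfigMap
# created in namespace configmap-9907 as an environment variable.
container_env = [
    {"name": "CONFIG_DATA_1",  # illustrative name
     "valueFrom": {"configMapKeyRef": {
         "name": "configmap-test-ca84367a-6ff0-45fc-95da-b5c8f1efa6ae",
         "key": "data-1"}}},  # key name is an assumption
]
```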
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:03:26.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-6383
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  9 14:03:26.718: INFO: Found 0 stateful pods, waiting for 3
Feb  9 14:03:36.727: INFO: Found 2 stateful pods, waiting for 3
Feb  9 14:03:46.735: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:03:46.736: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:03:46.736: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  9 14:03:56.731: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:03:56.732: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:03:56.732: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:03:56.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6383 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  9 14:03:57.203: INFO: stderr: "I0209 14:03:56.926307    1658 log.go:172] (0xc000104dc0) (0xc00032a820) Create stream\nI0209 14:03:56.926367    1658 log.go:172] (0xc000104dc0) (0xc00032a820) Stream added, broadcasting: 1\nI0209 14:03:56.928829    1658 log.go:172] (0xc000104dc0) Reply frame received for 1\nI0209 14:03:56.928870    1658 log.go:172] (0xc000104dc0) (0xc00062a320) Create stream\nI0209 14:03:56.928887    1658 log.go:172] (0xc000104dc0) (0xc00062a320) Stream added, broadcasting: 3\nI0209 14:03:56.929860    1658 log.go:172] (0xc000104dc0) Reply frame received for 3\nI0209 14:03:56.929881    1658 log.go:172] (0xc000104dc0) (0xc00032a8c0) Create stream\nI0209 14:03:56.929887    1658 log.go:172] (0xc000104dc0) (0xc00032a8c0) Stream added, broadcasting: 5\nI0209 14:03:56.930933    1658 log.go:172] (0xc000104dc0) Reply frame received for 5\nI0209 14:03:57.022720    1658 log.go:172] (0xc000104dc0) Data frame received for 5\nI0209 14:03:57.022776    1658 log.go:172] (0xc00032a8c0) (5) Data frame handling\nI0209 14:03:57.022802    1658 log.go:172] (0xc00032a8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0209 14:03:57.098252    1658 log.go:172] (0xc000104dc0) Data frame received for 3\nI0209 14:03:57.098279    1658 log.go:172] (0xc00062a320) (3) Data frame handling\nI0209 14:03:57.098292    1658 log.go:172] (0xc00062a320) (3) Data frame sent\nI0209 14:03:57.196729    1658 log.go:172] (0xc000104dc0) (0xc00062a320) Stream removed, broadcasting: 3\nI0209 14:03:57.196798    1658 log.go:172] (0xc000104dc0) Data frame received for 1\nI0209 14:03:57.196815    1658 log.go:172] (0xc00032a820) (1) Data frame handling\nI0209 14:03:57.196825    1658 log.go:172] (0xc00032a820) (1) Data frame sent\nI0209 14:03:57.196835    1658 log.go:172] (0xc000104dc0) (0xc00032a8c0) Stream removed, broadcasting: 5\nI0209 14:03:57.196853    1658 log.go:172] (0xc000104dc0) (0xc00032a820) Stream removed, broadcasting: 1\nI0209 14:03:57.196865    1658 log.go:172] (0xc000104dc0) Go away received\nI0209 14:03:57.197196    1658 log.go:172] (0xc000104dc0) (0xc00032a820) Stream removed, broadcasting: 1\nI0209 14:03:57.197270    1658 log.go:172] (0xc000104dc0) (0xc00062a320) Stream removed, broadcasting: 3\nI0209 14:03:57.197284    1658 log.go:172] (0xc000104dc0) (0xc00032a8c0) Stream removed, broadcasting: 5\n"
Feb  9 14:03:57.204: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  9 14:03:57.204: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  9 14:03:57.380: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb  9 14:04:07.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6383 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:04:07.784: INFO: stderr: "I0209 14:04:07.629864    1678 log.go:172] (0xc0008ce370) (0xc0002e06e0) Create stream\nI0209 14:04:07.629954    1678 log.go:172] (0xc0008ce370) (0xc0002e06e0) Stream added, broadcasting: 1\nI0209 14:04:07.632105    1678 log.go:172] (0xc0008ce370) Reply frame received for 1\nI0209 14:04:07.632140    1678 log.go:172] (0xc0008ce370) (0xc0009ec000) Create stream\nI0209 14:04:07.632158    1678 log.go:172] (0xc0008ce370) (0xc0009ec000) Stream added, broadcasting: 3\nI0209 14:04:07.633413    1678 log.go:172] (0xc0008ce370) Reply frame received for 3\nI0209 14:04:07.633431    1678 log.go:172] (0xc0008ce370) (0xc0006383c0) Create stream\nI0209 14:04:07.633439    1678 log.go:172] (0xc0008ce370) (0xc0006383c0) Stream added, broadcasting: 5\nI0209 14:04:07.634588    1678 log.go:172] (0xc0008ce370) Reply frame received for 5\nI0209 14:04:07.704796    1678 log.go:172] (0xc0008ce370) Data frame received for 5\nI0209 14:04:07.704846    1678 log.go:172] (0xc0006383c0) (5) Data frame handling\nI0209 14:04:07.704858    1678 log.go:172] (0xc0006383c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0209 14:04:07.704885    1678 log.go:172] (0xc0008ce370) Data frame received for 3\nI0209 14:04:07.704891    1678 log.go:172] (0xc0009ec000) (3) Data frame handling\nI0209 14:04:07.704900    1678 log.go:172] (0xc0009ec000) (3) Data frame sent\nI0209 14:04:07.770887    1678 log.go:172] (0xc0008ce370) Data frame received for 1\nI0209 14:04:07.770917    1678 log.go:172] (0xc0002e06e0) (1) Data frame handling\nI0209 14:04:07.770926    1678 log.go:172] (0xc0002e06e0) (1) Data frame sent\nI0209 14:04:07.770940    1678 log.go:172] (0xc0008ce370) (0xc0002e06e0) Stream removed, broadcasting: 1\nI0209 14:04:07.774754    1678 log.go:172] (0xc0008ce370) (0xc0009ec000) Stream removed, broadcasting: 3\nI0209 14:04:07.775143    1678 log.go:172] (0xc0008ce370) (0xc0006383c0) Stream removed, broadcasting: 5\nI0209 14:04:07.775222    1678 log.go:172] (0xc0008ce370) (0xc0002e06e0) Stream removed, broadcasting: 1\nI0209 14:04:07.775264    1678 log.go:172] (0xc0008ce370) (0xc0009ec000) Stream removed, broadcasting: 3\nI0209 14:04:07.775291    1678 log.go:172] (0xc0008ce370) (0xc0006383c0) Stream removed, broadcasting: 5\nI0209 14:04:07.775564    1678 log.go:172] (0xc0008ce370) Go away received\n"
Feb  9 14:04:07.785: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  9 14:04:07.785: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  9 14:04:17.831: INFO: Waiting for StatefulSet statefulset-6383/ss2 to complete update
Feb  9 14:04:17.831: INFO: Waiting for Pod statefulset-6383/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  9 14:04:17.831: INFO: Waiting for Pod statefulset-6383/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  9 14:04:27.882: INFO: Waiting for StatefulSet statefulset-6383/ss2 to complete update
Feb  9 14:04:27.882: INFO: Waiting for Pod statefulset-6383/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  9 14:04:27.882: INFO: Waiting for Pod statefulset-6383/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  9 14:04:37.979: INFO: Waiting for StatefulSet statefulset-6383/ss2 to complete update
Feb  9 14:04:37.979: INFO: Waiting for Pod statefulset-6383/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  9 14:04:47.845: INFO: Waiting for StatefulSet statefulset-6383/ss2 to complete update
STEP: Rolling back to a previous revision
Feb  9 14:04:57.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6383 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  9 14:04:58.260: INFO: stderr: "I0209 14:04:58.058696    1698 log.go:172] (0xc000116f20) (0xc0005c2b40) Create stream\nI0209 14:04:58.058925    1698 log.go:172] (0xc000116f20) (0xc0005c2b40) Stream added, broadcasting: 1\nI0209 14:04:58.063465    1698 log.go:172] (0xc000116f20) Reply frame received for 1\nI0209 14:04:58.063500    1698 log.go:172] (0xc000116f20) (0xc0005c2be0) Create stream\nI0209 14:04:58.063512    1698 log.go:172] (0xc000116f20) (0xc0005c2be0) Stream added, broadcasting: 3\nI0209 14:04:58.065490    1698 log.go:172] (0xc000116f20) Reply frame received for 3\nI0209 14:04:58.065543    1698 log.go:172] (0xc000116f20) (0xc000a00000) Create stream\nI0209 14:04:58.065560    1698 log.go:172] (0xc000116f20) (0xc000a00000) Stream added, broadcasting: 5\nI0209 14:04:58.067407    1698 log.go:172] (0xc000116f20) Reply frame received for 5\nI0209 14:04:58.165886    1698 log.go:172] (0xc000116f20) Data frame received for 5\nI0209 14:04:58.165914    1698 log.go:172] (0xc000a00000) (5) Data frame handling\nI0209 14:04:58.165928    1698 log.go:172] (0xc000a00000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0209 14:04:58.194047    1698 log.go:172] (0xc000116f20) Data frame received for 3\nI0209 14:04:58.194089    1698 log.go:172] (0xc0005c2be0) (3) Data frame handling\nI0209 14:04:58.194111    1698 log.go:172] (0xc0005c2be0) (3) Data frame sent\nI0209 14:04:58.255455    1698 log.go:172] (0xc000116f20) (0xc000a00000) Stream removed, broadcasting: 5\nI0209 14:04:58.255566    1698 log.go:172] (0xc000116f20) Data frame received for 1\nI0209 14:04:58.255580    1698 log.go:172] (0xc0005c2b40) (1) Data frame handling\nI0209 14:04:58.255591    1698 log.go:172] (0xc0005c2b40) (1) Data frame sent\nI0209 14:04:58.255621    1698 log.go:172] (0xc000116f20) (0xc0005c2b40) Stream removed, broadcasting: 1\nI0209 14:04:58.255773    1698 log.go:172] (0xc000116f20) (0xc0005c2be0) Stream removed, broadcasting: 3\nI0209 14:04:58.255842    1698 log.go:172] (0xc000116f20) Go away received\nI0209 14:04:58.256040    1698 log.go:172] (0xc000116f20) (0xc0005c2b40) Stream removed, broadcasting: 1\nI0209 14:04:58.256067    1698 log.go:172] (0xc000116f20) (0xc0005c2be0) Stream removed, broadcasting: 3\nI0209 14:04:58.256073    1698 log.go:172] (0xc000116f20) (0xc000a00000) Stream removed, broadcasting: 5\n"
Feb  9 14:04:58.261: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  9 14:04:58.261: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  9 14:05:08.316: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb  9 14:05:18.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6383 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:05:18.868: INFO: stderr: "I0209 14:05:18.679483    1716 log.go:172] (0xc0008f6370) (0xc0007d8640) Create stream\nI0209 14:05:18.679688    1716 log.go:172] (0xc0008f6370) (0xc0007d8640) Stream added, broadcasting: 1\nI0209 14:05:18.683151    1716 log.go:172] (0xc0008f6370) Reply frame received for 1\nI0209 14:05:18.683201    1716 log.go:172] (0xc0008f6370) (0xc0007da000) Create stream\nI0209 14:05:18.683216    1716 log.go:172] (0xc0008f6370) (0xc0007da000) Stream added, broadcasting: 3\nI0209 14:05:18.685093    1716 log.go:172] (0xc0008f6370) Reply frame received for 3\nI0209 14:05:18.685119    1716 log.go:172] (0xc0008f6370) (0xc00077c140) Create stream\nI0209 14:05:18.685130    1716 log.go:172] (0xc0008f6370) (0xc00077c140) Stream added, broadcasting: 5\nI0209 14:05:18.686222    1716 log.go:172] (0xc0008f6370) Reply frame received for 5\nI0209 14:05:18.780988    1716 log.go:172] (0xc0008f6370) Data frame received for 3\nI0209 14:05:18.781108    1716 log.go:172] (0xc0007da000) (3) Data frame handling\nI0209 14:05:18.781120    1716 log.go:172] (0xc0007da000) (3) Data frame sent\nI0209 14:05:18.781156    1716 log.go:172] (0xc0008f6370) Data frame received for 5\nI0209 14:05:18.781161    1716 log.go:172] (0xc00077c140) (5) Data frame handling\nI0209 14:05:18.781165    1716 log.go:172] (0xc00077c140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0209 14:05:18.862233    1716 log.go:172] (0xc0008f6370) Data frame received for 1\nI0209 14:05:18.862352    1716 log.go:172] (0xc0008f6370) (0xc0007da000) Stream removed, broadcasting: 3\nI0209 14:05:18.862426    1716 log.go:172] (0xc0007d8640) (1) Data frame handling\nI0209 14:05:18.862441    1716 log.go:172] (0xc0007d8640) (1) Data frame sent\nI0209 14:05:18.862592    1716 log.go:172] (0xc0008f6370) (0xc0007d8640) Stream removed, broadcasting: 1\nI0209 14:05:18.862683    1716 log.go:172] (0xc0008f6370) (0xc00077c140) Stream removed, broadcasting: 5\nI0209 14:05:18.862722    1716 log.go:172] (0xc0008f6370) Go away received\nI0209 14:05:18.863166    1716 log.go:172] (0xc0008f6370) (0xc0007d8640) Stream removed, broadcasting: 1\nI0209 14:05:18.863186    1716 log.go:172] (0xc0008f6370) (0xc0007da000) Stream removed, broadcasting: 3\nI0209 14:05:18.863199    1716 log.go:172] (0xc0008f6370) (0xc00077c140) Stream removed, broadcasting: 5\n"
Feb  9 14:05:18.868: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  9 14:05:18.868: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  9 14:05:28.908: INFO: Waiting for StatefulSet statefulset-6383/ss2 to complete update
Feb  9 14:05:28.908: INFO: Waiting for Pod statefulset-6383/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  9 14:05:28.908: INFO: Waiting for Pod statefulset-6383/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  9 14:05:38.921: INFO: Waiting for StatefulSet statefulset-6383/ss2 to complete update
Feb  9 14:05:38.921: INFO: Waiting for Pod statefulset-6383/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  9 14:05:38.921: INFO: Waiting for Pod statefulset-6383/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  9 14:05:48.920: INFO: Waiting for StatefulSet statefulset-6383/ss2 to complete update
Feb  9 14:05:48.920: INFO: Waiting for Pod statefulset-6383/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  9 14:05:58.923: INFO: Waiting for StatefulSet statefulset-6383/ss2 to complete update
Feb  9 14:05:58.923: INFO: Waiting for Pod statefulset-6383/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Feb  9 14:06:08.928: INFO: Waiting for StatefulSet statefulset-6383/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  9 14:06:18.927: INFO: Deleting all statefulset in ns statefulset-6383
Feb  9 14:06:18.931: INFO: Scaling statefulset ss2 to 0
Feb  9 14:06:48.973: INFO: Waiting for statefulset status.replicas updated to 0
Feb  9 14:06:48.978: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:06:49.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6383" for this suite.
Feb  9 14:06:57.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:06:57.212: INFO: namespace statefulset-6383 deletion completed in 8.203723143s

• [SLOW TEST:210.678 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
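The repeated "Waiting for Pod ... to have revision ... update revision ..." lines compare each pod's `controller-revision-hash` label against the StatefulSet's update revision; the rollout (or rollback) is complete only when every pod carries the target hash. A Python sketch of that check, using the revision hashes from the log (the framework's actual implementation is Go):

```python
def statefulset_update_complete(pods, update_revision):
    """True once every pod's controller-revision-hash label matches the
    update revision; mirrors the wait loop in the log above."""
    return all(p["labels"]["controller-revision-hash"] == update_revision
               for p in pods)

# State corresponding to the 14:04:17 log lines: ss2-0 already moved to the
# new revision, ss2-1 still carries the old one, so the update is incomplete.
pods = [
    {"name": "ss2-0", "labels": {"controller-revision-hash": "ss2-7c9b54fd4c"}},
    {"name": "ss2-1", "labels": {"controller-revision-hash": "ss2-6c5cd755cd"}},
]
```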
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:06:57.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 14:06:57.292: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb  9 14:06:57.436: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb  9 14:07:02.446: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  9 14:07:06.463: INFO: Creating deployment "test-rolling-update-deployment"
Feb  9 14:07:06.477: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb  9 14:07:06.488: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb  9 14:07:08.506: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Feb  9 14:07:08.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:07:10.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:07:12.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:07:14.619: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854026, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:07:16.524: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
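The repeated status dumps above show why the framework keeps polling: with the one adopted old pod still alive, UnavailableReplicas stays at 1 and the rollout is not yet finished. A sketch of the completion predicate being polled (field names mirror DeploymentStatus; this is an illustration, not the framework's actual Go code, and it omits the observedGeneration check for brevity):

```python
def deployment_complete(status, spec_replicas):
    """A rolling update is done once every replica is updated and
    available and nothing is left unavailable."""
    return (status["updatedReplicas"] == spec_replicas
            and status["availableReplicas"] == spec_replicas
            and status["unavailableReplicas"] == 0)

# Values from the 14:07:08 status line: UpdatedReplicas:1, AvailableReplicas:1,
# UnavailableReplicas:1 against spec.replicas=1, so polling continues.
status = {"updatedReplicas": 1, "availableReplicas": 1, "unavailableReplicas": 1}
```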
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  9 14:07:16.540: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-9338,SelfLink:/apis/apps/v1/namespaces/deployment-9338/deployments/test-rolling-update-deployment,UID:373b0006-196f-483b-a546-60b41852282c,ResourceVersion:23703268,Generation:1,CreationTimestamp:2020-02-09 14:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-09 14:07:06 +0000 UTC 2020-02-09 14:07:06 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-09 14:07:16 +0000 UTC 2020-02-09 14:07:06 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  9 14:07:16.546: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-9338,SelfLink:/apis/apps/v1/namespaces/deployment-9338/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:439cc4c5-aa2c-4728-87fc-a354fbf27539,ResourceVersion:23703257,Generation:1,CreationTimestamp:2020-02-09 14:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 373b0006-196f-483b-a546-60b41852282c 0xc0027d8fb7 0xc0027d8fb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  9 14:07:16.546: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb  9 14:07:16.546: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-9338,SelfLink:/apis/apps/v1/namespaces/deployment-9338/replicasets/test-rolling-update-controller,UID:001bc7c6-6a53-4c29-b691-d3f8e9f16d05,ResourceVersion:23703267,Generation:2,CreationTimestamp:2020-02-09 14:06:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 373b0006-196f-483b-a546-60b41852282c 0xc0027d8ecf 0xc0027d8ee0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  9 14:07:16.552: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-6mdd5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-6mdd5,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-9338,SelfLink:/api/v1/namespaces/deployment-9338/pods/test-rolling-update-deployment-79f6b9d75c-6mdd5,UID:85fc60e0-f3d1-4213-b411-7d9a5819330f,ResourceVersion:23703256,Generation:0,CreationTimestamp:2020-02-09 14:07:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 439cc4c5-aa2c-4728-87fc-a354fbf27539 0xc0027d9897 0xc0027d9898}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6q7h6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6q7h6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-6q7h6 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d9910} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d9930}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:07:06 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:07:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:07:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:07:06 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-02-09 14:07:06 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-09 14:07:14 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://3b5de13645c94a6b802527030ea6faf817414e9c1ef0e6d69127fc5e3ec967bd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:07:16.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9338" for this suite.
Feb  9 14:07:22.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:07:22.705: INFO: namespace deployment-9338 deletion completed in 6.144981693s

• [SLOW TEST:25.492 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
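The rolling-update rollout recorded above can be reproduced by hand with a sketch like the following. The resource name, label, and image mirror the log; the manifest itself is an illustrative assumption, not the suite's actual fixture.

```shell
# Illustrative Deployment mirroring the test's rollout target. The spec here
# is an assumption reconstructed from the logged object dump above.
cat > /tmp/test-rolling-update-deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rolling-update-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod
  template:
    metadata:
      labels:
        name: sample-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# Against a live cluster you would then run:
#   kubectl apply -f /tmp/test-rolling-update-deployment.yaml
#   kubectl rollout status deployment/test-rolling-update-deployment
# Local sanity check that the manifest carries the selector/template label pair:
grep -c 'name: sample-pod' /tmp/test-rolling-update-deployment.yaml
```

As in the log, the Deployment adopts the old `test-rolling-update-controller` ReplicaSet (scaled to 0) and reports `NewReplicaSetAvailable` once the new pods are ready.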
SSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:07:22.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:07:34.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6211" for this suite.
Feb  9 14:07:56.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:07:56.179: INFO: namespace replication-controller-6211 deletion completed in 22.148198034s

• [SLOW TEST:33.474 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
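The adoption flow exercised above (a bare pod carrying a `name` label, then a ReplicationController whose selector matches it) can be sketched as follows; the manifests are illustrative, not the test's exact fixtures.

```shell
# Sketch of RC adoption: an orphan pod plus an RC with a matching selector.
# Names and images are assumptions modeled on the log's steps.
cat > /tmp/pod-adoption.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
EOF
# On a live cluster, after 'kubectl apply -f /tmp/pod-adoption.yaml', adoption
# shows up as an ownerReference stamped onto the pre-existing pod:
#   kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'
# Local sanity check that the manifest contains the controller half:
grep -c 'kind: ReplicationController' /tmp/pod-adoption.yaml
```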
SSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:07:56.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-3861
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-3861
STEP: Deleting pre-stop pod
Feb  9 14:08:19.692: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:08:19.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-3861" for this suite.
Feb  9 14:08:57.748: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:08:57.936: INFO: namespace prestop-3861 deletion completed in 38.219494607s

• [SLOW TEST:61.757 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
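The `"prestop": 1` entry the tester reported above comes from a `preStop` lifecycle hook: the kubelet runs the hook before terminating the container, so the handler can phone home to a peer. A minimal sketch of such a pod follows; the endpoint URL, names, and image are assumptions for illustration only.

```shell
# Hypothetical pod whose preStop hook notifies a peer before termination,
# in the same shape the test relies on. All specifics here are illustrative.
cat > /tmp/prestop-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tester
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: tester
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "wget -qO- http://server:8080/write?prestop=1"]
EOF
# Deleting the pod on a live cluster fires the hook before SIGTERM:
#   kubectl apply -f /tmp/prestop-pod.yaml
#   kubectl delete pod tester
# Local sanity check that the hook is present in the manifest:
grep -c preStop /tmp/prestop-pod.yaml
```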
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:08:57.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2814
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Feb  9 14:08:58.100: INFO: Found 0 stateful pods, waiting for 3
Feb  9 14:09:08.135: INFO: Found 2 stateful pods, waiting for 3
Feb  9 14:09:18.125: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:09:18.125: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:09:18.125: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  9 14:09:28.110: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:09:28.110: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:09:28.110: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Feb  9 14:09:28.149: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb  9 14:09:38.230: INFO: Updating stateful set ss2
Feb  9 14:09:38.281: INFO: Waiting for Pod statefulset-2814/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Feb  9 14:09:48.634: INFO: Found 2 stateful pods, waiting for 3
Feb  9 14:09:58.656: INFO: Found 2 stateful pods, waiting for 3
Feb  9 14:10:08.656: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:10:08.656: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:10:08.656: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  9 14:10:18.652: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:10:18.652: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:10:18.652: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb  9 14:10:18.688: INFO: Updating stateful set ss2
Feb  9 14:10:18.705: INFO: Waiting for Pod statefulset-2814/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  9 14:10:29.280: INFO: Updating stateful set ss2
Feb  9 14:10:29.467: INFO: Waiting for StatefulSet statefulset-2814/ss2 to complete update
Feb  9 14:10:29.468: INFO: Waiting for Pod statefulset-2814/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  9 14:10:39.483: INFO: Waiting for StatefulSet statefulset-2814/ss2 to complete update
Feb  9 14:10:39.484: INFO: Waiting for Pod statefulset-2814/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Feb  9 14:10:49.481: INFO: Waiting for StatefulSet statefulset-2814/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  9 14:10:59.486: INFO: Deleting all statefulset in ns statefulset-2814
Feb  9 14:10:59.492: INFO: Scaling statefulset ss2 to 0
Feb  9 14:11:29.536: INFO: Waiting for statefulset status.replicas updated to 0
Feb  9 14:11:29.541: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:11:29.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2814" for this suite.
Feb  9 14:11:35.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:11:35.766: INFO: namespace statefulset-2814 deletion completed in 6.191799542s

• [SLOW TEST:157.829 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
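The canary and phased rollouts above are driven by the RollingUpdate strategy's `partition` field: pods with an ordinal at or above the partition receive the new revision, while lower ordinals keep the old one. A sketch against the log's `ss2` set (the container name `nginx` is an assumption; the set's container is not named in this excerpt):

```shell
# Canary patch: only the highest ordinal (ss2-2) gets the new revision.
# Lowering the partition step by step to 0 performs the phased rollout.
cat > /tmp/ss2-canary-patch.json <<'EOF'
{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}
EOF
# On a live cluster:
#   kubectl set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
#   kubectl patch statefulset ss2 -p "$(cat /tmp/ss2-canary-patch.json)"
#   ...then repeat with partition 1, then 0, to finish the rollout.
# Local sanity check on the patch payload:
grep -c '"partition":2' /tmp/ss2-canary-patch.json
```

This matches the log's sequence: a partition above the replica count stages the new revision without touching any pod, a partition of 2 canaries `ss2-2`, and partition 0 completes the update.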
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:11:35.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb  9 14:11:58.007: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-902 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 14:11:58.007: INFO: >>> kubeConfig: /root/.kube/config
I0209 14:11:58.088367       8 log.go:172] (0xc000f226e0) (0xc001281e00) Create stream
I0209 14:11:58.088481       8 log.go:172] (0xc000f226e0) (0xc001281e00) Stream added, broadcasting: 1
I0209 14:11:58.094405       8 log.go:172] (0xc000f226e0) Reply frame received for 1
I0209 14:11:58.094483       8 log.go:172] (0xc000f226e0) (0xc000ae1720) Create stream
I0209 14:11:58.094539       8 log.go:172] (0xc000f226e0) (0xc000ae1720) Stream added, broadcasting: 3
I0209 14:11:58.101223       8 log.go:172] (0xc000f226e0) Reply frame received for 3
I0209 14:11:58.101289       8 log.go:172] (0xc000f226e0) (0xc000ae17c0) Create stream
I0209 14:11:58.101298       8 log.go:172] (0xc000f226e0) (0xc000ae17c0) Stream added, broadcasting: 5
I0209 14:11:58.103271       8 log.go:172] (0xc000f226e0) Reply frame received for 5
I0209 14:11:58.237534       8 log.go:172] (0xc000f226e0) Data frame received for 3
I0209 14:11:58.237589       8 log.go:172] (0xc000ae1720) (3) Data frame handling
I0209 14:11:58.237629       8 log.go:172] (0xc000ae1720) (3) Data frame sent
I0209 14:11:58.398311       8 log.go:172] (0xc000f226e0) (0xc000ae1720) Stream removed, broadcasting: 3
I0209 14:11:58.398416       8 log.go:172] (0xc000f226e0) Data frame received for 1
I0209 14:11:58.398445       8 log.go:172] (0xc001281e00) (1) Data frame handling
I0209 14:11:58.398464       8 log.go:172] (0xc000f226e0) (0xc000ae17c0) Stream removed, broadcasting: 5
I0209 14:11:58.398527       8 log.go:172] (0xc001281e00) (1) Data frame sent
I0209 14:11:58.398590       8 log.go:172] (0xc000f226e0) (0xc001281e00) Stream removed, broadcasting: 1
I0209 14:11:58.398635       8 log.go:172] (0xc000f226e0) Go away received
I0209 14:11:58.398994       8 log.go:172] (0xc000f226e0) (0xc001281e00) Stream removed, broadcasting: 1
I0209 14:11:58.399019       8 log.go:172] (0xc000f226e0) (0xc000ae1720) Stream removed, broadcasting: 3
I0209 14:11:58.399034       8 log.go:172] (0xc000f226e0) (0xc000ae17c0) Stream removed, broadcasting: 5
Feb  9 14:11:58.399: INFO: Exec stderr: ""
Feb  9 14:11:58.399: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-902 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 14:11:58.399: INFO: >>> kubeConfig: /root/.kube/config
I0209 14:11:58.476886       8 log.go:172] (0xc001ae6000) (0xc001e3e000) Create stream
I0209 14:11:58.477088       8 log.go:172] (0xc001ae6000) (0xc001e3e000) Stream added, broadcasting: 1
I0209 14:11:58.488869       8 log.go:172] (0xc001ae6000) Reply frame received for 1
I0209 14:11:58.488939       8 log.go:172] (0xc001ae6000) (0xc001e3e0a0) Create stream
I0209 14:11:58.488951       8 log.go:172] (0xc001ae6000) (0xc001e3e0a0) Stream added, broadcasting: 3
I0209 14:11:58.492389       8 log.go:172] (0xc001ae6000) Reply frame received for 3
I0209 14:11:58.492450       8 log.go:172] (0xc001ae6000) (0xc00131f180) Create stream
I0209 14:11:58.492481       8 log.go:172] (0xc001ae6000) (0xc00131f180) Stream added, broadcasting: 5
I0209 14:11:58.496451       8 log.go:172] (0xc001ae6000) Reply frame received for 5
I0209 14:11:58.651866       8 log.go:172] (0xc001ae6000) Data frame received for 3
I0209 14:11:58.651950       8 log.go:172] (0xc001e3e0a0) (3) Data frame handling
I0209 14:11:58.651972       8 log.go:172] (0xc001e3e0a0) (3) Data frame sent
I0209 14:11:58.751705       8 log.go:172] (0xc001ae6000) Data frame received for 1
I0209 14:11:58.751827       8 log.go:172] (0xc001ae6000) (0xc00131f180) Stream removed, broadcasting: 5
I0209 14:11:58.751890       8 log.go:172] (0xc001e3e000) (1) Data frame handling
I0209 14:11:58.751924       8 log.go:172] (0xc001e3e000) (1) Data frame sent
I0209 14:11:58.752052       8 log.go:172] (0xc001ae6000) (0xc001e3e0a0) Stream removed, broadcasting: 3
I0209 14:11:58.752103       8 log.go:172] (0xc001ae6000) (0xc001e3e000) Stream removed, broadcasting: 1
I0209 14:11:58.752368       8 log.go:172] (0xc001ae6000) (0xc001e3e000) Stream removed, broadcasting: 1
I0209 14:11:58.752386       8 log.go:172] (0xc001ae6000) (0xc001e3e0a0) Stream removed, broadcasting: 3
I0209 14:11:58.752395       8 log.go:172] (0xc001ae6000) (0xc00131f180) Stream removed, broadcasting: 5
Feb  9 14:11:58.752: INFO: Exec stderr: ""
Feb  9 14:11:58.753: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-902 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 14:11:58.753: INFO: >>> kubeConfig: /root/.kube/config
I0209 14:11:58.813038       8 log.go:172] (0xc00105c9a0) (0xc00131f860) Create stream
I0209 14:11:58.813199       8 log.go:172] (0xc00105c9a0) (0xc00131f860) Stream added, broadcasting: 1
I0209 14:11:58.823132       8 log.go:172] (0xc00105c9a0) Reply frame received for 1
I0209 14:11:58.823245       8 log.go:172] (0xc00105c9a0) (0xc00039c140) Create stream
I0209 14:11:58.823288       8 log.go:172] (0xc00105c9a0) (0xc00039c140) Stream added, broadcasting: 3
I0209 14:11:58.828893       8 log.go:172] (0xc00105c9a0) Reply frame received for 3
I0209 14:11:58.828922       8 log.go:172] (0xc00105c9a0) (0xc00039c320) Create stream
I0209 14:11:58.828931       8 log.go:172] (0xc00105c9a0) (0xc00039c320) Stream added, broadcasting: 5
I0209 14:11:58.831427       8 log.go:172] (0xc00105c9a0) Reply frame received for 5
I0209 14:11:58.947988       8 log.go:172] (0xc00105c9a0) Data frame received for 3
I0209 14:11:58.948075       8 log.go:172] (0xc00039c140) (3) Data frame handling
I0209 14:11:58.948095       8 log.go:172] (0xc00039c140) (3) Data frame sent
I0209 14:11:59.068695       8 log.go:172] (0xc00105c9a0) (0xc00039c140) Stream removed, broadcasting: 3
I0209 14:11:59.069058       8 log.go:172] (0xc00105c9a0) Data frame received for 1
I0209 14:11:59.069079       8 log.go:172] (0xc00131f860) (1) Data frame handling
I0209 14:11:59.069097       8 log.go:172] (0xc00131f860) (1) Data frame sent
I0209 14:11:59.069111       8 log.go:172] (0xc00105c9a0) (0xc00131f860) Stream removed, broadcasting: 1
I0209 14:11:59.069398       8 log.go:172] (0xc00105c9a0) (0xc00039c320) Stream removed, broadcasting: 5
I0209 14:11:59.069443       8 log.go:172] (0xc00105c9a0) (0xc00131f860) Stream removed, broadcasting: 1
I0209 14:11:59.069455       8 log.go:172] (0xc00105c9a0) (0xc00039c140) Stream removed, broadcasting: 3
I0209 14:11:59.069466       8 log.go:172] (0xc00105c9a0) (0xc00039c320) Stream removed, broadcasting: 5
I0209 14:11:59.069831       8 log.go:172] (0xc00105c9a0) Go away received
Feb  9 14:11:59.069: INFO: Exec stderr: ""
Feb  9 14:11:59.070: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-902 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 14:11:59.070: INFO: >>> kubeConfig: /root/.kube/config
I0209 14:11:59.145517       8 log.go:172] (0xc000f23550) (0xc00039c960) Create stream
I0209 14:11:59.145635       8 log.go:172] (0xc000f23550) (0xc00039c960) Stream added, broadcasting: 1
I0209 14:11:59.152149       8 log.go:172] (0xc000f23550) Reply frame received for 1
I0209 14:11:59.152268       8 log.go:172] (0xc000f23550) (0xc001e3e320) Create stream
I0209 14:11:59.152278       8 log.go:172] (0xc000f23550) (0xc001e3e320) Stream added, broadcasting: 3
I0209 14:11:59.153950       8 log.go:172] (0xc000f23550) Reply frame received for 3
I0209 14:11:59.154015       8 log.go:172] (0xc000f23550) (0xc00131fa40) Create stream
I0209 14:11:59.154028       8 log.go:172] (0xc000f23550) (0xc00131fa40) Stream added, broadcasting: 5
I0209 14:11:59.155229       8 log.go:172] (0xc000f23550) Reply frame received for 5
I0209 14:11:59.248233       8 log.go:172] (0xc000f23550) Data frame received for 3
I0209 14:11:59.248482       8 log.go:172] (0xc001e3e320) (3) Data frame handling
I0209 14:11:59.248566       8 log.go:172] (0xc001e3e320) (3) Data frame sent
I0209 14:11:59.374367       8 log.go:172] (0xc000f23550) Data frame received for 1
I0209 14:11:59.374466       8 log.go:172] (0xc000f23550) (0xc00131fa40) Stream removed, broadcasting: 5
I0209 14:11:59.374530       8 log.go:172] (0xc00039c960) (1) Data frame handling
I0209 14:11:59.374584       8 log.go:172] (0xc00039c960) (1) Data frame sent
I0209 14:11:59.374593       8 log.go:172] (0xc000f23550) (0xc001e3e320) Stream removed, broadcasting: 3
I0209 14:11:59.374620       8 log.go:172] (0xc000f23550) (0xc00039c960) Stream removed, broadcasting: 1
I0209 14:11:59.374647       8 log.go:172] (0xc000f23550) Go away received
I0209 14:11:59.375205       8 log.go:172] (0xc000f23550) (0xc00039c960) Stream removed, broadcasting: 1
I0209 14:11:59.375270       8 log.go:172] (0xc000f23550) (0xc001e3e320) Stream removed, broadcasting: 3
I0209 14:11:59.375292       8 log.go:172] (0xc000f23550) (0xc00131fa40) Stream removed, broadcasting: 5
Feb  9 14:11:59.375: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb  9 14:11:59.375: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-902 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 14:11:59.375: INFO: >>> kubeConfig: /root/.kube/config
I0209 14:11:59.434259       8 log.go:172] (0xc001b124d0) (0xc001f5e5a0) Create stream
I0209 14:11:59.434323       8 log.go:172] (0xc001b124d0) (0xc001f5e5a0) Stream added, broadcasting: 1
I0209 14:11:59.440349       8 log.go:172] (0xc001b124d0) Reply frame received for 1
I0209 14:11:59.440392       8 log.go:172] (0xc001b124d0) (0xc001f5e640) Create stream
I0209 14:11:59.440406       8 log.go:172] (0xc001b124d0) (0xc001f5e640) Stream added, broadcasting: 3
I0209 14:11:59.442145       8 log.go:172] (0xc001b124d0) Reply frame received for 3
I0209 14:11:59.442164       8 log.go:172] (0xc001b124d0) (0xc001e3e500) Create stream
I0209 14:11:59.442177       8 log.go:172] (0xc001b124d0) (0xc001e3e500) Stream added, broadcasting: 5
I0209 14:11:59.443322       8 log.go:172] (0xc001b124d0) Reply frame received for 5
I0209 14:11:59.554464       8 log.go:172] (0xc001b124d0) Data frame received for 3
I0209 14:11:59.554532       8 log.go:172] (0xc001f5e640) (3) Data frame handling
I0209 14:11:59.554592       8 log.go:172] (0xc001f5e640) (3) Data frame sent
I0209 14:11:59.691354       8 log.go:172] (0xc001b124d0) Data frame received for 1
I0209 14:11:59.691487       8 log.go:172] (0xc001f5e5a0) (1) Data frame handling
I0209 14:11:59.691515       8 log.go:172] (0xc001f5e5a0) (1) Data frame sent
I0209 14:11:59.691551       8 log.go:172] (0xc001b124d0) (0xc001f5e5a0) Stream removed, broadcasting: 1
I0209 14:11:59.691911       8 log.go:172] (0xc001b124d0) (0xc001f5e640) Stream removed, broadcasting: 3
I0209 14:11:59.692085       8 log.go:172] (0xc001b124d0) (0xc001e3e500) Stream removed, broadcasting: 5
I0209 14:11:59.692119       8 log.go:172] (0xc001b124d0) Go away received
I0209 14:11:59.692252       8 log.go:172] (0xc001b124d0) (0xc001f5e5a0) Stream removed, broadcasting: 1
I0209 14:11:59.692274       8 log.go:172] (0xc001b124d0) (0xc001f5e640) Stream removed, broadcasting: 3
I0209 14:11:59.692292       8 log.go:172] (0xc001b124d0) (0xc001e3e500) Stream removed, broadcasting: 5
Feb  9 14:11:59.692: INFO: Exec stderr: ""
Feb  9 14:11:59.692: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-902 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 14:11:59.692: INFO: >>> kubeConfig: /root/.kube/config
I0209 14:11:59.759865       8 log.go:172] (0xc000f23b80) (0xc00039ca00) Create stream
I0209 14:11:59.759947       8 log.go:172] (0xc000f23b80) (0xc00039ca00) Stream added, broadcasting: 1
I0209 14:11:59.768227       8 log.go:172] (0xc000f23b80) Reply frame received for 1
I0209 14:11:59.768257       8 log.go:172] (0xc000f23b80) (0xc00039cb40) Create stream
I0209 14:11:59.768266       8 log.go:172] (0xc000f23b80) (0xc00039cb40) Stream added, broadcasting: 3
I0209 14:11:59.770177       8 log.go:172] (0xc000f23b80) Reply frame received for 3
I0209 14:11:59.770217       8 log.go:172] (0xc000f23b80) (0xc001c66460) Create stream
I0209 14:11:59.770232       8 log.go:172] (0xc000f23b80) (0xc001c66460) Stream added, broadcasting: 5
I0209 14:11:59.775189       8 log.go:172] (0xc000f23b80) Reply frame received for 5
I0209 14:11:59.868702       8 log.go:172] (0xc000f23b80) Data frame received for 3
I0209 14:11:59.868771       8 log.go:172] (0xc00039cb40) (3) Data frame handling
I0209 14:11:59.868790       8 log.go:172] (0xc00039cb40) (3) Data frame sent
I0209 14:12:00.007506       8 log.go:172] (0xc000f23b80) (0xc001c66460) Stream removed, broadcasting: 5
I0209 14:12:00.007793       8 log.go:172] (0xc000f23b80) Data frame received for 1
I0209 14:12:00.007825       8 log.go:172] (0xc000f23b80) (0xc00039cb40) Stream removed, broadcasting: 3
I0209 14:12:00.007898       8 log.go:172] (0xc00039ca00) (1) Data frame handling
I0209 14:12:00.007926       8 log.go:172] (0xc00039ca00) (1) Data frame sent
I0209 14:12:00.007968       8 log.go:172] (0xc000f23b80) (0xc00039ca00) Stream removed, broadcasting: 1
I0209 14:12:00.008008       8 log.go:172] (0xc000f23b80) Go away received
I0209 14:12:00.008619       8 log.go:172] (0xc000f23b80) (0xc00039ca00) Stream removed, broadcasting: 1
I0209 14:12:00.008670       8 log.go:172] (0xc000f23b80) (0xc00039cb40) Stream removed, broadcasting: 3
I0209 14:12:00.008689       8 log.go:172] (0xc000f23b80) (0xc001c66460) Stream removed, broadcasting: 5
Feb  9 14:12:00.008: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb  9 14:12:00.008: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-902 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 14:12:00.008: INFO: >>> kubeConfig: /root/.kube/config
I0209 14:12:00.072268       8 log.go:172] (0xc002066840) (0xc00039d400) Create stream
I0209 14:12:00.072401       8 log.go:172] (0xc002066840) (0xc00039d400) Stream added, broadcasting: 1
I0209 14:12:00.077889       8 log.go:172] (0xc002066840) Reply frame received for 1
I0209 14:12:00.077951       8 log.go:172] (0xc002066840) (0xc001f5e6e0) Create stream
I0209 14:12:00.077964       8 log.go:172] (0xc002066840) (0xc001f5e6e0) Stream added, broadcasting: 3
I0209 14:12:00.080506       8 log.go:172] (0xc002066840) Reply frame received for 3
I0209 14:12:00.080534       8 log.go:172] (0xc002066840) (0xc001e3e8c0) Create stream
I0209 14:12:00.080544       8 log.go:172] (0xc002066840) (0xc001e3e8c0) Stream added, broadcasting: 5
I0209 14:12:00.082154       8 log.go:172] (0xc002066840) Reply frame received for 5
I0209 14:12:00.168732       8 log.go:172] (0xc002066840) Data frame received for 3
I0209 14:12:00.168769       8 log.go:172] (0xc001f5e6e0) (3) Data frame handling
I0209 14:12:00.168784       8 log.go:172] (0xc001f5e6e0) (3) Data frame sent
I0209 14:12:00.293372       8 log.go:172] (0xc002066840) (0xc001f5e6e0) Stream removed, broadcasting: 3
I0209 14:12:00.293489       8 log.go:172] (0xc002066840) Data frame received for 1
I0209 14:12:00.293517       8 log.go:172] (0xc002066840) (0xc001e3e8c0) Stream removed, broadcasting: 5
I0209 14:12:00.293550       8 log.go:172] (0xc00039d400) (1) Data frame handling
I0209 14:12:00.293574       8 log.go:172] (0xc00039d400) (1) Data frame sent
I0209 14:12:00.293587       8 log.go:172] (0xc002066840) (0xc00039d400) Stream removed, broadcasting: 1
I0209 14:12:00.293612       8 log.go:172] (0xc002066840) Go away received
I0209 14:12:00.293867       8 log.go:172] (0xc002066840) (0xc00039d400) Stream removed, broadcasting: 1
I0209 14:12:00.293878       8 log.go:172] (0xc002066840) (0xc001f5e6e0) Stream removed, broadcasting: 3
I0209 14:12:00.293884       8 log.go:172] (0xc002066840) (0xc001e3e8c0) Stream removed, broadcasting: 5
Feb  9 14:12:00.293: INFO: Exec stderr: ""
Feb  9 14:12:00.294: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-902 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 14:12:00.294: INFO: >>> kubeConfig: /root/.kube/config
I0209 14:12:00.409082       8 log.go:172] (0xc00105de40) (0xc0023aa0a0) Create stream
I0209 14:12:00.409193       8 log.go:172] (0xc00105de40) (0xc0023aa0a0) Stream added, broadcasting: 1
I0209 14:12:00.415483       8 log.go:172] (0xc00105de40) Reply frame received for 1
I0209 14:12:00.415544       8 log.go:172] (0xc00105de40) (0xc00039d900) Create stream
I0209 14:12:00.415565       8 log.go:172] (0xc00105de40) (0xc00039d900) Stream added, broadcasting: 3
I0209 14:12:00.417318       8 log.go:172] (0xc00105de40) Reply frame received for 3
I0209 14:12:00.417337       8 log.go:172] (0xc00105de40) (0xc001e3e960) Create stream
I0209 14:12:00.417346       8 log.go:172] (0xc00105de40) (0xc001e3e960) Stream added, broadcasting: 5
I0209 14:12:00.419287       8 log.go:172] (0xc00105de40) Reply frame received for 5
I0209 14:12:00.796145       8 log.go:172] (0xc00105de40) Data frame received for 3
I0209 14:12:00.796301       8 log.go:172] (0xc00039d900) (3) Data frame handling
I0209 14:12:00.796330       8 log.go:172] (0xc00039d900) (3) Data frame sent
I0209 14:12:00.947383       8 log.go:172] (0xc00105de40) (0xc00039d900) Stream removed, broadcasting: 3
I0209 14:12:00.947576       8 log.go:172] (0xc00105de40) Data frame received for 1
I0209 14:12:00.947590       8 log.go:172] (0xc0023aa0a0) (1) Data frame handling
I0209 14:12:00.947599       8 log.go:172] (0xc0023aa0a0) (1) Data frame sent
I0209 14:12:00.947605       8 log.go:172] (0xc00105de40) (0xc0023aa0a0) Stream removed, broadcasting: 1
I0209 14:12:00.947858       8 log.go:172] (0xc00105de40) (0xc001e3e960) Stream removed, broadcasting: 5
I0209 14:12:00.947897       8 log.go:172] (0xc00105de40) (0xc0023aa0a0) Stream removed, broadcasting: 1
I0209 14:12:00.947903       8 log.go:172] (0xc00105de40) (0xc00039d900) Stream removed, broadcasting: 3
I0209 14:12:00.947907       8 log.go:172] (0xc00105de40) (0xc001e3e960) Stream removed, broadcasting: 5
I0209 14:12:00.948122       8 log.go:172] (0xc00105de40) Go away received
Feb  9 14:12:00.948: INFO: Exec stderr: ""
Feb  9 14:12:00.948: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-902 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 14:12:00.948: INFO: >>> kubeConfig: /root/.kube/config
I0209 14:12:01.007159       8 log.go:172] (0xc002248d10) (0xc001e3ed20) Create stream
I0209 14:12:01.007296       8 log.go:172] (0xc002248d10) (0xc001e3ed20) Stream added, broadcasting: 1
I0209 14:12:01.017145       8 log.go:172] (0xc002248d10) Reply frame received for 1
I0209 14:12:01.017282       8 log.go:172] (0xc002248d10) (0xc001f5e780) Create stream
I0209 14:12:01.017294       8 log.go:172] (0xc002248d10) (0xc001f5e780) Stream added, broadcasting: 3
I0209 14:12:01.018955       8 log.go:172] (0xc002248d10) Reply frame received for 3
I0209 14:12:01.018977       8 log.go:172] (0xc002248d10) (0xc001e3edc0) Create stream
I0209 14:12:01.018985       8 log.go:172] (0xc002248d10) (0xc001e3edc0) Stream added, broadcasting: 5
I0209 14:12:01.020447       8 log.go:172] (0xc002248d10) Reply frame received for 5
I0209 14:12:01.128939       8 log.go:172] (0xc002248d10) Data frame received for 3
I0209 14:12:01.129050       8 log.go:172] (0xc001f5e780) (3) Data frame handling
I0209 14:12:01.129102       8 log.go:172] (0xc001f5e780) (3) Data frame sent
I0209 14:12:01.227734       8 log.go:172] (0xc002248d10) Data frame received for 1
I0209 14:12:01.227922       8 log.go:172] (0xc001e3ed20) (1) Data frame handling
I0209 14:12:01.227969       8 log.go:172] (0xc001e3ed20) (1) Data frame sent
I0209 14:12:01.228491       8 log.go:172] (0xc002248d10) (0xc001e3edc0) Stream removed, broadcasting: 5
I0209 14:12:01.228551       8 log.go:172] (0xc002248d10) (0xc001f5e780) Stream removed, broadcasting: 3
I0209 14:12:01.228778       8 log.go:172] (0xc002248d10) (0xc001e3ed20) Stream removed, broadcasting: 1
I0209 14:12:01.228862       8 log.go:172] (0xc002248d10) Go away received
I0209 14:12:01.229202       8 log.go:172] (0xc002248d10) (0xc001e3ed20) Stream removed, broadcasting: 1
I0209 14:12:01.229247       8 log.go:172] (0xc002248d10) (0xc001f5e780) Stream removed, broadcasting: 3
I0209 14:12:01.229260       8 log.go:172] (0xc002248d10) (0xc001e3edc0) Stream removed, broadcasting: 5
Feb  9 14:12:01.229: INFO: Exec stderr: ""
Feb  9 14:12:01.229: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-902 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 14:12:01.229: INFO: >>> kubeConfig: /root/.kube/config
I0209 14:12:01.288695       8 log.go:172] (0xc002249760) (0xc001e3f180) Create stream
I0209 14:12:01.288768       8 log.go:172] (0xc002249760) (0xc001e3f180) Stream added, broadcasting: 1
I0209 14:12:01.293518       8 log.go:172] (0xc002249760) Reply frame received for 1
I0209 14:12:01.293544       8 log.go:172] (0xc002249760) (0xc001f5e820) Create stream
I0209 14:12:01.293552       8 log.go:172] (0xc002249760) (0xc001f5e820) Stream added, broadcasting: 3
I0209 14:12:01.295184       8 log.go:172] (0xc002249760) Reply frame received for 3
I0209 14:12:01.295218       8 log.go:172] (0xc002249760) (0xc001c66640) Create stream
I0209 14:12:01.295230       8 log.go:172] (0xc002249760) (0xc001c66640) Stream added, broadcasting: 5
I0209 14:12:01.296729       8 log.go:172] (0xc002249760) Reply frame received for 5
I0209 14:12:01.385046       8 log.go:172] (0xc002249760) Data frame received for 3
I0209 14:12:01.385112       8 log.go:172] (0xc001f5e820) (3) Data frame handling
I0209 14:12:01.385130       8 log.go:172] (0xc001f5e820) (3) Data frame sent
I0209 14:12:01.468131       8 log.go:172] (0xc002249760) Data frame received for 1
I0209 14:12:01.468208       8 log.go:172] (0xc001e3f180) (1) Data frame handling
I0209 14:12:01.468241       8 log.go:172] (0xc001e3f180) (1) Data frame sent
I0209 14:12:01.468287       8 log.go:172] (0xc002249760) (0xc001e3f180) Stream removed, broadcasting: 1
I0209 14:12:01.469444       8 log.go:172] (0xc002249760) (0xc001c66640) Stream removed, broadcasting: 5
I0209 14:12:01.469492       8 log.go:172] (0xc002249760) (0xc001f5e820) Stream removed, broadcasting: 3
I0209 14:12:01.469532       8 log.go:172] (0xc002249760) (0xc001e3f180) Stream removed, broadcasting: 1
I0209 14:12:01.469543       8 log.go:172] (0xc002249760) (0xc001f5e820) Stream removed, broadcasting: 3
I0209 14:12:01.469562       8 log.go:172] (0xc002249760) (0xc001c66640) Stream removed, broadcasting: 5
I0209 14:12:01.469972       8 log.go:172] (0xc002249760) Go away received
Feb  9 14:12:01.470: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:12:01.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-902" for this suite.
Feb  9 14:12:45.502: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:12:45.635: INFO: namespace e2e-kubelet-etc-hosts-902 deletion completed in 44.154578169s

• [SLOW TEST:69.868 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:12:45.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9245.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9245.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb  9 14:13:01.853: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-9245/dns-test-705e92b4-aac5-424e-978b-2554a07b6c90: the server could not find the requested resource (get pods dns-test-705e92b4-aac5-424e-978b-2554a07b6c90)
Feb  9 14:13:01.874: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-9245/dns-test-705e92b4-aac5-424e-978b-2554a07b6c90: the server could not find the requested resource (get pods dns-test-705e92b4-aac5-424e-978b-2554a07b6c90)
Feb  9 14:13:01.889: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9245/dns-test-705e92b4-aac5-424e-978b-2554a07b6c90: the server could not find the requested resource (get pods dns-test-705e92b4-aac5-424e-978b-2554a07b6c90)
Feb  9 14:13:01.898: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9245/dns-test-705e92b4-aac5-424e-978b-2554a07b6c90: the server could not find the requested resource (get pods dns-test-705e92b4-aac5-424e-978b-2554a07b6c90)
Feb  9 14:13:01.903: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-9245/dns-test-705e92b4-aac5-424e-978b-2554a07b6c90: the server could not find the requested resource (get pods dns-test-705e92b4-aac5-424e-978b-2554a07b6c90)
Feb  9 14:13:01.909: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-9245/dns-test-705e92b4-aac5-424e-978b-2554a07b6c90: the server could not find the requested resource (get pods dns-test-705e92b4-aac5-424e-978b-2554a07b6c90)
Feb  9 14:13:01.917: INFO: Unable to read jessie_udp@PodARecord from pod dns-9245/dns-test-705e92b4-aac5-424e-978b-2554a07b6c90: the server could not find the requested resource (get pods dns-test-705e92b4-aac5-424e-978b-2554a07b6c90)
Feb  9 14:13:01.924: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9245/dns-test-705e92b4-aac5-424e-978b-2554a07b6c90: the server could not find the requested resource (get pods dns-test-705e92b4-aac5-424e-978b-2554a07b6c90)
Feb  9 14:13:01.924: INFO: Lookups using dns-9245/dns-test-705e92b4-aac5-424e-978b-2554a07b6c90 failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Feb  9 14:13:07.030: INFO: DNS probes using dns-9245/dns-test-705e92b4-aac5-424e-978b-2554a07b6c90 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:13:07.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9245" for this suite.
Feb  9 14:13:13.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:13:13.376: INFO: namespace dns-9245 deletion completed in 6.219720421s

• [SLOW TEST:27.739 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:13:13.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:13:18.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2489" for this suite.
Feb  9 14:13:25.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:13:25.181: INFO: namespace watch-2489 deletion completed in 6.178420896s

• [SLOW TEST:11.805 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:13:25.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-72573a3e-b6c8-44b9-8fdb-409193c8a6a8
STEP: Creating a pod to test consume secrets
Feb  9 14:13:25.328: INFO: Waiting up to 5m0s for pod "pod-secrets-71d4af5e-ff20-4e2f-a22d-c3a57ba72012" in namespace "secrets-8246" to be "success or failure"
Feb  9 14:13:25.335: INFO: Pod "pod-secrets-71d4af5e-ff20-4e2f-a22d-c3a57ba72012": Phase="Pending", Reason="", readiness=false. Elapsed: 6.24173ms
Feb  9 14:13:27.343: INFO: Pod "pod-secrets-71d4af5e-ff20-4e2f-a22d-c3a57ba72012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014625403s
Feb  9 14:13:29.358: INFO: Pod "pod-secrets-71d4af5e-ff20-4e2f-a22d-c3a57ba72012": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029754393s
Feb  9 14:13:31.369: INFO: Pod "pod-secrets-71d4af5e-ff20-4e2f-a22d-c3a57ba72012": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040618433s
Feb  9 14:13:33.399: INFO: Pod "pod-secrets-71d4af5e-ff20-4e2f-a22d-c3a57ba72012": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071072608s
Feb  9 14:13:35.413: INFO: Pod "pod-secrets-71d4af5e-ff20-4e2f-a22d-c3a57ba72012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084175642s
STEP: Saw pod success
Feb  9 14:13:35.413: INFO: Pod "pod-secrets-71d4af5e-ff20-4e2f-a22d-c3a57ba72012" satisfied condition "success or failure"
Feb  9 14:13:35.419: INFO: Trying to get logs from node iruya-node pod pod-secrets-71d4af5e-ff20-4e2f-a22d-c3a57ba72012 container secret-env-test: 
STEP: delete the pod
Feb  9 14:13:35.515: INFO: Waiting for pod pod-secrets-71d4af5e-ff20-4e2f-a22d-c3a57ba72012 to disappear
Feb  9 14:13:35.524: INFO: Pod pod-secrets-71d4af5e-ff20-4e2f-a22d-c3a57ba72012 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:13:35.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8246" for this suite.
Feb  9 14:13:41.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:13:41.742: INFO: namespace secrets-8246 deletion completed in 6.212392236s

• [SLOW TEST:16.561 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:13:41.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  9 14:13:41.828: INFO: Waiting up to 5m0s for pod "pod-c587ab13-c8ce-4b74-b65a-36c612acf906" in namespace "emptydir-1661" to be "success or failure"
Feb  9 14:13:41.904: INFO: Pod "pod-c587ab13-c8ce-4b74-b65a-36c612acf906": Phase="Pending", Reason="", readiness=false. Elapsed: 75.740536ms
Feb  9 14:13:43.922: INFO: Pod "pod-c587ab13-c8ce-4b74-b65a-36c612acf906": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093493337s
Feb  9 14:13:45.933: INFO: Pod "pod-c587ab13-c8ce-4b74-b65a-36c612acf906": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104683701s
Feb  9 14:13:47.949: INFO: Pod "pod-c587ab13-c8ce-4b74-b65a-36c612acf906": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120932808s
Feb  9 14:13:49.962: INFO: Pod "pod-c587ab13-c8ce-4b74-b65a-36c612acf906": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13326215s
Feb  9 14:13:51.971: INFO: Pod "pod-c587ab13-c8ce-4b74-b65a-36c612acf906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.142207244s
STEP: Saw pod success
Feb  9 14:13:51.971: INFO: Pod "pod-c587ab13-c8ce-4b74-b65a-36c612acf906" satisfied condition "success or failure"
Feb  9 14:13:51.975: INFO: Trying to get logs from node iruya-node pod pod-c587ab13-c8ce-4b74-b65a-36c612acf906 container test-container: 
STEP: delete the pod
Feb  9 14:13:52.127: INFO: Waiting for pod pod-c587ab13-c8ce-4b74-b65a-36c612acf906 to disappear
Feb  9 14:13:52.137: INFO: Pod pod-c587ab13-c8ce-4b74-b65a-36c612acf906 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:13:52.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1661" for this suite.
Feb  9 14:13:58.175: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:13:58.275: INFO: namespace emptydir-1661 deletion completed in 6.128402201s

• [SLOW TEST:16.532 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:13:58.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Feb  9 14:13:59.347: INFO: created pod pod-service-account-defaultsa
Feb  9 14:13:59.347: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb  9 14:13:59.425: INFO: created pod pod-service-account-mountsa
Feb  9 14:13:59.425: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb  9 14:13:59.439: INFO: created pod pod-service-account-nomountsa
Feb  9 14:13:59.439: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb  9 14:13:59.474: INFO: created pod pod-service-account-defaultsa-mountspec
Feb  9 14:13:59.474: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb  9 14:13:59.640: INFO: created pod pod-service-account-mountsa-mountspec
Feb  9 14:13:59.640: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb  9 14:13:59.695: INFO: created pod pod-service-account-nomountsa-mountspec
Feb  9 14:13:59.695: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb  9 14:13:59.796: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb  9 14:13:59.796: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb  9 14:14:00.158: INFO: created pod pod-service-account-mountsa-nomountspec
Feb  9 14:14:00.159: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb  9 14:14:00.183: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb  9 14:14:00.183: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:14:00.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9893" for this suite.
Feb  9 14:14:34.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:14:34.606: INFO: namespace svcaccounts-9893 deletion completed in 33.950704409s

• [SLOW TEST:36.331 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
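The seven pods above form a decision matrix for token automounting. The precedence rule they exercise — a pod-level `spec.automountServiceAccountToken` overrides the ServiceAccount's `automountServiceAccountToken`, and when both are unset the token is mounted — can be sketched as a minimal model (not the e2e code itself):

```python
def token_automounted(sa_automount, pod_automount):
    """Decide whether the service account token volume is mounted.

    The pod-level spec.automountServiceAccountToken, when set, overrides
    the ServiceAccount's automountServiceAccountToken; if both are unset
    (None), the token is mounted by default.
    """
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True

# The matrix logged above (sa_automount, pod_automount) -> mounted:
assert token_automounted(False, None) is False   # nomountsa
assert token_automounted(None, True) is True     # defaultsa-mountspec
assert token_automounted(True, True) is True     # mountsa-mountspec
assert token_automounted(False, True) is True    # nomountsa-mountspec
assert token_automounted(None, False) is False   # defaultsa-nomountspec
assert token_automounted(True, False) is False   # mountsa-nomountspec
assert token_automounted(False, False) is False  # nomountsa-nomountspec
```

Note that every `*-mountspec` pod mounts the token and every `*-nomountspec` pod does not, regardless of the ServiceAccount setting — the pod spec always wins when present.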
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:14:34.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:14:44.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7035" for this suite.
Feb  9 14:15:46.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:15:47.158: INFO: namespace kubelet-test-7035 deletion completed in 1m2.270909162s

• [SLOW TEST:72.551 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
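The read-only-busybox test above turns on `securityContext.readOnlyRootFilesystem`, so any write to the container's root filesystem fails. A minimal sketch of the kind of manifest involved (the name, image, and command here are illustrative, not the actual e2e fixture):

```python
# Hypothetical minimal pod of the kind this test exercises: a busybox
# container whose root filesystem is mounted read-only, so the shell's
# attempt to write /file is rejected by the kernel.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-readonly"},
    "spec": {
        "containers": [{
            "name": "busybox",
            "image": "busybox",
            "command": ["/bin/sh", "-c", "echo test > /file; sleep 240"],
            "securityContext": {"readOnlyRootFilesystem": True},
        }],
    },
}
assert pod["spec"]["containers"][0]["securityContext"]["readOnlyRootFilesystem"]
```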
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:15:47.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb  9 14:15:47.326: INFO: Waiting up to 5m0s for pod "pod-d55b2943-1076-4f3b-9494-e899c0c35d14" in namespace "emptydir-6182" to be "success or failure"
Feb  9 14:15:47.339: INFO: Pod "pod-d55b2943-1076-4f3b-9494-e899c0c35d14": Phase="Pending", Reason="", readiness=false. Elapsed: 13.309416ms
Feb  9 14:15:49.766: INFO: Pod "pod-d55b2943-1076-4f3b-9494-e899c0c35d14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.440516574s
Feb  9 14:15:51.776: INFO: Pod "pod-d55b2943-1076-4f3b-9494-e899c0c35d14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.450153595s
Feb  9 14:15:53.792: INFO: Pod "pod-d55b2943-1076-4f3b-9494-e899c0c35d14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.466495199s
Feb  9 14:15:55.798: INFO: Pod "pod-d55b2943-1076-4f3b-9494-e899c0c35d14": Phase="Pending", Reason="", readiness=false. Elapsed: 8.472339369s
Feb  9 14:15:57.807: INFO: Pod "pod-d55b2943-1076-4f3b-9494-e899c0c35d14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.481314479s
STEP: Saw pod success
Feb  9 14:15:57.807: INFO: Pod "pod-d55b2943-1076-4f3b-9494-e899c0c35d14" satisfied condition "success or failure"
Feb  9 14:15:57.829: INFO: Trying to get logs from node iruya-node pod pod-d55b2943-1076-4f3b-9494-e899c0c35d14 container test-container: 
STEP: delete the pod
Feb  9 14:15:57.898: INFO: Waiting for pod pod-d55b2943-1076-4f3b-9494-e899c0c35d14 to disappear
Feb  9 14:15:57.980: INFO: Pod pod-d55b2943-1076-4f3b-9494-e899c0c35d14 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:15:57.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6182" for this suite.
Feb  9 14:16:04.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:16:04.097: INFO: namespace emptydir-6182 deletion completed in 6.108722254s

• [SLOW TEST:16.939 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
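The tmpfs variant of the emptyDir test above depends on `emptyDir.medium: Memory`, which backs the volume with tmpfs instead of node-local disk. A sketch of the volume wiring (pod and volume names are illustrative):

```python
# Hypothetical pod mounting a memory-backed emptyDir; the e2e test then
# checks the mount type (tmpfs) and the permission bits on the mount point.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "emptydir-tmpfs-demo"},
    "spec": {
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        "volumes": [{
            "name": "test-volume",
            # medium "Memory" requests tmpfs rather than the node's disk
            "emptyDir": {"medium": "Memory"},
        }],
    },
}
assert pod["spec"]["volumes"][0]["emptyDir"]["medium"] == "Memory"
```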
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:16:04.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Feb  9 14:16:04.245: INFO: Waiting up to 5m0s for pod "var-expansion-44a59e95-9bc1-4e78-92b0-774e3ed5d0f8" in namespace "var-expansion-5438" to be "success or failure"
Feb  9 14:16:04.269: INFO: Pod "var-expansion-44a59e95-9bc1-4e78-92b0-774e3ed5d0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.493437ms
Feb  9 14:16:06.276: INFO: Pod "var-expansion-44a59e95-9bc1-4e78-92b0-774e3ed5d0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030941958s
Feb  9 14:16:08.283: INFO: Pod "var-expansion-44a59e95-9bc1-4e78-92b0-774e3ed5d0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037796405s
Feb  9 14:16:10.381: INFO: Pod "var-expansion-44a59e95-9bc1-4e78-92b0-774e3ed5d0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135943173s
Feb  9 14:16:12.401: INFO: Pod "var-expansion-44a59e95-9bc1-4e78-92b0-774e3ed5d0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.155655397s
Feb  9 14:16:14.416: INFO: Pod "var-expansion-44a59e95-9bc1-4e78-92b0-774e3ed5d0f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.170755939s
STEP: Saw pod success
Feb  9 14:16:14.416: INFO: Pod "var-expansion-44a59e95-9bc1-4e78-92b0-774e3ed5d0f8" satisfied condition "success or failure"
Feb  9 14:16:14.423: INFO: Trying to get logs from node iruya-node pod var-expansion-44a59e95-9bc1-4e78-92b0-774e3ed5d0f8 container dapi-container: 
STEP: delete the pod
Feb  9 14:16:14.605: INFO: Waiting for pod var-expansion-44a59e95-9bc1-4e78-92b0-774e3ed5d0f8 to disappear
Feb  9 14:16:14.612: INFO: Pod var-expansion-44a59e95-9bc1-4e78-92b0-774e3ed5d0f8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:16:14.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5438" for this suite.
Feb  9 14:16:20.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:16:20.799: INFO: namespace var-expansion-5438 deletion completed in 6.180906484s

• [SLOW TEST:16.701 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:16:20.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 14:16:20.941: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 19.918469ms)
Feb  9 14:16:20.949: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.94185ms)
Feb  9 14:16:20.960: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 10.536929ms)
Feb  9 14:16:20.988: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 28.220671ms)
Feb  9 14:16:20.995: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.534788ms)
Feb  9 14:16:21.000: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.589504ms)
Feb  9 14:16:21.008: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.311044ms)
Feb  9 14:16:21.014: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.552593ms)
Feb  9 14:16:21.020: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.239588ms)
Feb  9 14:16:21.025: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.162299ms)
Feb  9 14:16:21.032: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 7.083085ms)
Feb  9 14:16:21.038: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.34015ms)
Feb  9 14:16:21.043: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.409655ms)
Feb  9 14:16:21.048: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.135008ms)
Feb  9 14:16:21.054: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.882358ms)
Feb  9 14:16:21.060: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.443852ms)
Feb  9 14:16:21.067: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 6.837627ms)
Feb  9 14:16:21.073: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 5.79303ms)
Feb  9 14:16:21.077: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 3.604466ms)
Feb  9 14:16:21.081: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: alternatives.log alternatives.l... (200; 4.217348ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:16:21.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-2512" for this suite.
Feb  9 14:16:27.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:16:27.292: INFO: namespace proxy-2512 deletion completed in 6.177280101s

• [SLOW TEST:6.493 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
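The twenty probes above all hit the node proxy subresource, which forwards the request through the apiserver to the kubelet's `/logs` endpoint on that node. The path being constructed is simply:

```python
def node_logs_proxy_path(node, path=""):
    """Build the apiserver path for the node proxy subresource; the
    apiserver relays the request to the kubelet's /logs handler."""
    return f"/api/v1/nodes/{node}/proxy/logs/{path}"

assert node_logs_proxy_path("iruya-node") == "/api/v1/nodes/iruya-node/proxy/logs/"
```

The `(200; …ms)` suffix on each log line is the HTTP status and round-trip latency of one probe.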
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:16:27.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-d73e8fef-0ac1-45da-8853-32b90219a297
STEP: Creating a pod to test consume configMaps
Feb  9 14:16:27.386: INFO: Waiting up to 5m0s for pod "pod-configmaps-11ba62a1-2ebf-4880-aeb9-1c59a1f3d93e" in namespace "configmap-3297" to be "success or failure"
Feb  9 14:16:27.447: INFO: Pod "pod-configmaps-11ba62a1-2ebf-4880-aeb9-1c59a1f3d93e": Phase="Pending", Reason="", readiness=false. Elapsed: 60.434111ms
Feb  9 14:16:29.458: INFO: Pod "pod-configmaps-11ba62a1-2ebf-4880-aeb9-1c59a1f3d93e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071405402s
Feb  9 14:16:31.469: INFO: Pod "pod-configmaps-11ba62a1-2ebf-4880-aeb9-1c59a1f3d93e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082559527s
Feb  9 14:16:33.477: INFO: Pod "pod-configmaps-11ba62a1-2ebf-4880-aeb9-1c59a1f3d93e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090164369s
Feb  9 14:16:35.483: INFO: Pod "pod-configmaps-11ba62a1-2ebf-4880-aeb9-1c59a1f3d93e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.096764704s
Feb  9 14:16:37.495: INFO: Pod "pod-configmaps-11ba62a1-2ebf-4880-aeb9-1c59a1f3d93e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.108364486s
STEP: Saw pod success
Feb  9 14:16:37.495: INFO: Pod "pod-configmaps-11ba62a1-2ebf-4880-aeb9-1c59a1f3d93e" satisfied condition "success or failure"
Feb  9 14:16:37.500: INFO: Trying to get logs from node iruya-node pod pod-configmaps-11ba62a1-2ebf-4880-aeb9-1c59a1f3d93e container configmap-volume-test: 
STEP: delete the pod
Feb  9 14:16:37.555: INFO: Waiting for pod pod-configmaps-11ba62a1-2ebf-4880-aeb9-1c59a1f3d93e to disappear
Feb  9 14:16:37.562: INFO: Pod pod-configmaps-11ba62a1-2ebf-4880-aeb9-1c59a1f3d93e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:16:37.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3297" for this suite.
Feb  9 14:16:43.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:16:43.735: INFO: namespace configmap-3297 deletion completed in 6.167042065s

• [SLOW TEST:16.443 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
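The "as non-root" ConfigMap test above combines a ConfigMap volume with a non-root `securityContext.runAsUser`, verifying the projected files are still readable. A sketch of the shape involved (names, UID, and paths are illustrative):

```python
# Hypothetical pod consuming a ConfigMap as a volume while running as a
# non-root user; the e2e test reads a projected key's file content.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "configmap-nonroot-demo"},
    "spec": {
        "securityContext": {"runAsUser": 1000},  # any non-root UID
        "containers": [{
            "name": "configmap-volume-test",
            "image": "busybox",
            "command": ["cat", "/etc/configmap-volume/data-1"],
            "volumeMounts": [{"name": "configmap-volume",
                              "mountPath": "/etc/configmap-volume"}],
        }],
        "volumes": [{"name": "configmap-volume",
                     "configMap": {"name": "configmap-test-volume"}}],
    },
}
assert pod["spec"]["securityContext"]["runAsUser"] != 0
```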
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:16:43.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 14:16:43.888: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71ef05e2-2a02-4e3f-9ad9-7b4319056359" in namespace "downward-api-8216" to be "success or failure"
Feb  9 14:16:43.941: INFO: Pod "downwardapi-volume-71ef05e2-2a02-4e3f-9ad9-7b4319056359": Phase="Pending", Reason="", readiness=false. Elapsed: 53.057474ms
Feb  9 14:16:45.955: INFO: Pod "downwardapi-volume-71ef05e2-2a02-4e3f-9ad9-7b4319056359": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066635738s
Feb  9 14:16:48.016: INFO: Pod "downwardapi-volume-71ef05e2-2a02-4e3f-9ad9-7b4319056359": Phase="Pending", Reason="", readiness=false. Elapsed: 4.128102959s
Feb  9 14:16:50.026: INFO: Pod "downwardapi-volume-71ef05e2-2a02-4e3f-9ad9-7b4319056359": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137903359s
Feb  9 14:16:52.037: INFO: Pod "downwardapi-volume-71ef05e2-2a02-4e3f-9ad9-7b4319056359": Phase="Pending", Reason="", readiness=false. Elapsed: 8.148596162s
Feb  9 14:16:54.053: INFO: Pod "downwardapi-volume-71ef05e2-2a02-4e3f-9ad9-7b4319056359": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.16430586s
STEP: Saw pod success
Feb  9 14:16:54.053: INFO: Pod "downwardapi-volume-71ef05e2-2a02-4e3f-9ad9-7b4319056359" satisfied condition "success or failure"
Feb  9 14:16:54.056: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-71ef05e2-2a02-4e3f-9ad9-7b4319056359 container client-container: 
STEP: delete the pod
Feb  9 14:16:54.195: INFO: Waiting for pod downwardapi-volume-71ef05e2-2a02-4e3f-9ad9-7b4319056359 to disappear
Feb  9 14:16:54.202: INFO: Pod downwardapi-volume-71ef05e2-2a02-4e3f-9ad9-7b4319056359 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:16:54.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8216" for this suite.
Feb  9 14:17:00.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:17:00.379: INFO: namespace downward-api-8216 deletion completed in 6.171674729s

• [SLOW TEST:16.643 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
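The "set mode on item file" test above uses a per-item `mode` in a downward API volume, which overrides the volume's `defaultMode` for that one projected file. Sketch (the field and mode value are illustrative):

```python
# Hypothetical downward API volume projecting the pod name into a file
# with an explicit per-item mode (0400: owner read-only).
volume = {
    "name": "podinfo",
    "downwardAPI": {
        "items": [{
            "path": "podname",
            "fieldRef": {"fieldPath": "metadata.name"},
            "mode": 0o400,  # overrides defaultMode for this file only
        }],
    },
}
assert oct(volume["downwardAPI"]["items"][0]["mode"]) == "0o400"
```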
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:17:00.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  9 14:17:00.536: INFO: Waiting up to 5m0s for pod "downward-api-bd41e8be-0fb5-418d-a2a0-3860f28d011c" in namespace "downward-api-6842" to be "success or failure"
Feb  9 14:17:00.547: INFO: Pod "downward-api-bd41e8be-0fb5-418d-a2a0-3860f28d011c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.986064ms
Feb  9 14:17:02.559: INFO: Pod "downward-api-bd41e8be-0fb5-418d-a2a0-3860f28d011c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022695132s
Feb  9 14:17:04.570: INFO: Pod "downward-api-bd41e8be-0fb5-418d-a2a0-3860f28d011c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033480616s
Feb  9 14:17:06.601: INFO: Pod "downward-api-bd41e8be-0fb5-418d-a2a0-3860f28d011c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064844386s
Feb  9 14:17:08.618: INFO: Pod "downward-api-bd41e8be-0fb5-418d-a2a0-3860f28d011c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.081780743s
Feb  9 14:17:10.629: INFO: Pod "downward-api-bd41e8be-0fb5-418d-a2a0-3860f28d011c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09322335s
STEP: Saw pod success
Feb  9 14:17:10.629: INFO: Pod "downward-api-bd41e8be-0fb5-418d-a2a0-3860f28d011c" satisfied condition "success or failure"
Feb  9 14:17:10.633: INFO: Trying to get logs from node iruya-node pod downward-api-bd41e8be-0fb5-418d-a2a0-3860f28d011c container dapi-container: 
STEP: delete the pod
Feb  9 14:17:10.736: INFO: Waiting for pod downward-api-bd41e8be-0fb5-418d-a2a0-3860f28d011c to disappear
Feb  9 14:17:10.744: INFO: Pod downward-api-bd41e8be-0fb5-418d-a2a0-3860f28d011c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:17:10.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6842" for this suite.
Feb  9 14:17:16.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:17:16.965: INFO: namespace downward-api-6842 deletion completed in 6.216048272s

• [SLOW TEST:16.585 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
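The host-IP test above injects the node's IP through the downward API as an environment variable via a `fieldRef` on `status.hostIP`. The env entry looks like (the variable name is illustrative):

```python
# Hypothetical container env entry: the kubelet resolves status.hostIP
# at pod start and exports it as HOST_IP inside the container.
container_env = [{
    "name": "HOST_IP",
    "valueFrom": {"fieldRef": {"fieldPath": "status.hostIP"}},
}]
assert container_env[0]["valueFrom"]["fieldRef"]["fieldPath"] == "status.hostIP"
```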
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:17:16.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-731484c7-dac2-49fd-a4d7-4b40a45a8d24
STEP: Creating a pod to test consume secrets
Feb  9 14:17:17.110: INFO: Waiting up to 5m0s for pod "pod-secrets-1659bdea-081d-4f32-9c30-3d5c15f59837" in namespace "secrets-6663" to be "success or failure"
Feb  9 14:17:17.116: INFO: Pod "pod-secrets-1659bdea-081d-4f32-9c30-3d5c15f59837": Phase="Pending", Reason="", readiness=false. Elapsed: 6.556743ms
Feb  9 14:17:19.129: INFO: Pod "pod-secrets-1659bdea-081d-4f32-9c30-3d5c15f59837": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019308681s
Feb  9 14:17:21.141: INFO: Pod "pod-secrets-1659bdea-081d-4f32-9c30-3d5c15f59837": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030614668s
Feb  9 14:17:23.147: INFO: Pod "pod-secrets-1659bdea-081d-4f32-9c30-3d5c15f59837": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03747581s
Feb  9 14:17:25.156: INFO: Pod "pod-secrets-1659bdea-081d-4f32-9c30-3d5c15f59837": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045656333s
Feb  9 14:17:27.171: INFO: Pod "pod-secrets-1659bdea-081d-4f32-9c30-3d5c15f59837": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061053477s
STEP: Saw pod success
Feb  9 14:17:27.171: INFO: Pod "pod-secrets-1659bdea-081d-4f32-9c30-3d5c15f59837" satisfied condition "success or failure"
Feb  9 14:17:27.177: INFO: Trying to get logs from node iruya-node pod pod-secrets-1659bdea-081d-4f32-9c30-3d5c15f59837 container secret-volume-test: 
STEP: delete the pod
Feb  9 14:17:27.256: INFO: Waiting for pod pod-secrets-1659bdea-081d-4f32-9c30-3d5c15f59837 to disappear
Feb  9 14:17:27.270: INFO: Pod pod-secrets-1659bdea-081d-4f32-9c30-3d5c15f59837 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:17:27.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6663" for this suite.
Feb  9 14:17:33.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:17:33.468: INFO: namespace secrets-6663 deletion completed in 6.193830097s

• [SLOW TEST:16.502 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
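The Secrets test above layers three settings: a non-root `runAsUser`, an `fsGroup` that group-owns the volume's files, and a `defaultMode` controlling their permission bits. A sketch with illustrative values (the fixture's actual UID/GID/mode are not shown in the log):

```python
# Hypothetical pod spec fragment: the secret's files are created with
# mode 0440, owned by fsGroup 1001, and read by a process running as
# UID 1000 — so group read access is what makes the test pass.
pod = {
    "spec": {
        "securityContext": {
            "runAsUser": 1000,  # non-root
            "fsGroup": 1001,    # files in the volume get this group
        },
        "volumes": [{
            "name": "secret-volume",
            "secret": {
                "secretName": "secret-test",
                "defaultMode": 0o440,  # permission bits for projected files
            },
        }],
    },
}
assert pod["spec"]["volumes"][0]["secret"]["defaultMode"] == 0o440
```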
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:17:33.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-af62d3ef-6379-4bcb-a58e-7ce784362173
STEP: Creating a pod to test consume secrets
Feb  9 14:17:33.699: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-39d5861b-ebf7-426c-bbec-c100a6d5b8d6" in namespace "projected-2441" to be "success or failure"
Feb  9 14:17:33.735: INFO: Pod "pod-projected-secrets-39d5861b-ebf7-426c-bbec-c100a6d5b8d6": Phase="Pending", Reason="", readiness=false. Elapsed: 36.109658ms
Feb  9 14:17:35.746: INFO: Pod "pod-projected-secrets-39d5861b-ebf7-426c-bbec-c100a6d5b8d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046738999s
Feb  9 14:17:37.757: INFO: Pod "pod-projected-secrets-39d5861b-ebf7-426c-bbec-c100a6d5b8d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057225743s
Feb  9 14:17:39.767: INFO: Pod "pod-projected-secrets-39d5861b-ebf7-426c-bbec-c100a6d5b8d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067770584s
Feb  9 14:17:41.785: INFO: Pod "pod-projected-secrets-39d5861b-ebf7-426c-bbec-c100a6d5b8d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.085500354s
Feb  9 14:17:43.804: INFO: Pod "pod-projected-secrets-39d5861b-ebf7-426c-bbec-c100a6d5b8d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.104339771s
STEP: Saw pod success
Feb  9 14:17:43.804: INFO: Pod "pod-projected-secrets-39d5861b-ebf7-426c-bbec-c100a6d5b8d6" satisfied condition "success or failure"
Feb  9 14:17:43.809: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-39d5861b-ebf7-426c-bbec-c100a6d5b8d6 container secret-volume-test: 
STEP: delete the pod
Feb  9 14:17:43.919: INFO: Waiting for pod pod-projected-secrets-39d5861b-ebf7-426c-bbec-c100a6d5b8d6 to disappear
Feb  9 14:17:43.935: INFO: Pod pod-projected-secrets-39d5861b-ebf7-426c-bbec-c100a6d5b8d6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:17:43.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2441" for this suite.
Feb  9 14:17:50.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:17:50.176: INFO: namespace projected-2441 deletion completed in 6.233199674s

• [SLOW TEST:16.707 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
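The projected-secret test above mounts the same secret through more than one volume in a single pod; each mount projects the same keys. The volume list can be sketched as (names are illustrative):

```python
# Hypothetical pair of projected volumes backed by one secret; the e2e
# test mounts both and reads the same key from each mount path.
secret_name = "projected-secret-test"
volumes = [
    {"name": f"projected-secret-volume-{i}",
     "projected": {"sources": [{"secret": {"name": secret_name}}]}}
    for i in (1, 2)
]
assert len(volumes) == 2
assert all(v["projected"]["sources"][0]["secret"]["name"] == secret_name
           for v in volumes)
```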
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:17:50.176: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  9 14:17:50.262: INFO: Waiting up to 5m0s for pod "downward-api-1bb0d50e-175d-45ae-84f0-d90be6c4a4a8" in namespace "downward-api-2031" to be "success or failure"
Feb  9 14:17:50.283: INFO: Pod "downward-api-1bb0d50e-175d-45ae-84f0-d90be6c4a4a8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.867526ms
Feb  9 14:17:52.297: INFO: Pod "downward-api-1bb0d50e-175d-45ae-84f0-d90be6c4a4a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035509362s
Feb  9 14:17:54.309: INFO: Pod "downward-api-1bb0d50e-175d-45ae-84f0-d90be6c4a4a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047398849s
Feb  9 14:17:56.332: INFO: Pod "downward-api-1bb0d50e-175d-45ae-84f0-d90be6c4a4a8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070270099s
Feb  9 14:17:58.344: INFO: Pod "downward-api-1bb0d50e-175d-45ae-84f0-d90be6c4a4a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082053539s
Feb  9 14:18:00.366: INFO: Pod "downward-api-1bb0d50e-175d-45ae-84f0-d90be6c4a4a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.103715749s
STEP: Saw pod success
Feb  9 14:18:00.366: INFO: Pod "downward-api-1bb0d50e-175d-45ae-84f0-d90be6c4a4a8" satisfied condition "success or failure"
Feb  9 14:18:00.372: INFO: Trying to get logs from node iruya-node pod downward-api-1bb0d50e-175d-45ae-84f0-d90be6c4a4a8 container dapi-container: 
STEP: delete the pod
Feb  9 14:18:00.424: INFO: Waiting for pod downward-api-1bb0d50e-175d-45ae-84f0-d90be6c4a4a8 to disappear
Feb  9 14:18:00.432: INFO: Pod downward-api-1bb0d50e-175d-45ae-84f0-d90be6c4a4a8 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:18:00.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2031" for this suite.
Feb  9 14:18:06.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:18:06.622: INFO: namespace downward-api-2031 deletion completed in 6.183317815s

• [SLOW TEST:16.446 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:18:06.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:18:13.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-149" for this suite.
Feb  9 14:18:19.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:18:19.214: INFO: namespace namespaces-149 deletion completed in 6.15946463s
STEP: Destroying namespace "nsdeletetest-5377" for this suite.
Feb  9 14:18:19.218: INFO: Namespace nsdeletetest-5377 was already deleted
STEP: Destroying namespace "nsdeletetest-719" for this suite.
Feb  9 14:18:25.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:18:25.410: INFO: namespace nsdeletetest-719 deletion completed in 6.192406329s

• [SLOW TEST:18.787 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:18:25.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  9 14:18:25.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2294'
Feb  9 14:18:27.803: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  9 14:18:27.804: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Feb  9 14:18:27.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-2294'
Feb  9 14:18:28.066: INFO: stderr: ""
Feb  9 14:18:28.067: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:18:28.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2294" for this suite.
Feb  9 14:18:34.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:18:34.207: INFO: namespace kubectl-2294 deletion completed in 6.1357217s

• [SLOW TEST:8.795 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:18:34.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Feb  9 14:18:34.287: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Feb  9 14:18:35.485: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb  9 14:18:37.829: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:18:39.874: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:18:41.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:18:43.840: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:18:45.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716854715, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:18:52.733: INFO: Waited 4.888354213s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:18:53.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4163" for this suite.
Feb  9 14:18:59.381: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:18:59.521: INFO: namespace aggregator-4163 deletion completed in 6.165254717s

• [SLOW TEST:25.314 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:18:59.521: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 14:18:59.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb  9 14:18:59.829: INFO: stderr: ""
Feb  9 14:18:59.829: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:18:59.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9880" for this suite.
Feb  9 14:19:05.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:19:06.076: INFO: namespace kubectl-9880 deletion completed in 6.233734793s

• [SLOW TEST:6.554 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:19:06.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb  9 14:19:16.710: INFO: Successfully updated pod "pod-update-activedeadlineseconds-10a638ca-9141-47fb-97a6-f5a3beee7cd4"
Feb  9 14:19:16.710: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-10a638ca-9141-47fb-97a6-f5a3beee7cd4" in namespace "pods-538" to be "terminated due to deadline exceeded"
Feb  9 14:19:16.778: INFO: Pod "pod-update-activedeadlineseconds-10a638ca-9141-47fb-97a6-f5a3beee7cd4": Phase="Running", Reason="", readiness=true. Elapsed: 67.334311ms
Feb  9 14:19:18.791: INFO: Pod "pod-update-activedeadlineseconds-10a638ca-9141-47fb-97a6-f5a3beee7cd4": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.080789962s
Feb  9 14:19:18.791: INFO: Pod "pod-update-activedeadlineseconds-10a638ca-9141-47fb-97a6-f5a3beee7cd4" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:19:18.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-538" for this suite.
Feb  9 14:19:24.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:19:25.094: INFO: namespace pods-538 deletion completed in 6.286477423s

• [SLOW TEST:19.019 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:19:25.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Feb  9 14:19:25.259: INFO: namespace kubectl-6926
Feb  9 14:19:25.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6926'
Feb  9 14:19:25.655: INFO: stderr: ""
Feb  9 14:19:25.655: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  9 14:19:26.663: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 14:19:26.663: INFO: Found 0 / 1
Feb  9 14:19:27.665: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 14:19:27.665: INFO: Found 0 / 1
Feb  9 14:19:28.677: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 14:19:28.677: INFO: Found 0 / 1
Feb  9 14:19:29.666: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 14:19:29.667: INFO: Found 0 / 1
Feb  9 14:19:30.667: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 14:19:30.667: INFO: Found 0 / 1
Feb  9 14:19:31.677: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 14:19:31.677: INFO: Found 0 / 1
Feb  9 14:19:32.680: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 14:19:32.681: INFO: Found 0 / 1
Feb  9 14:19:33.665: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 14:19:33.665: INFO: Found 0 / 1
Feb  9 14:19:34.664: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 14:19:34.664: INFO: Found 1 / 1
Feb  9 14:19:34.664: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  9 14:19:34.668: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 14:19:34.668: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  9 14:19:34.668: INFO: wait on redis-master startup in kubectl-6926 
Feb  9 14:19:34.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qwn8g redis-master --namespace=kubectl-6926'
Feb  9 14:19:34.816: INFO: stderr: ""
Feb  9 14:19:34.816: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Feb 14:19:32.835 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Feb 14:19:32.835 # Server started, Redis version 3.2.12\n1:M 09 Feb 14:19:32.837 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Feb 14:19:32.837 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Feb  9 14:19:34.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6926'
Feb  9 14:19:34.997: INFO: stderr: ""
Feb  9 14:19:34.997: INFO: stdout: "service/rm2 exposed\n"
Feb  9 14:19:35.082: INFO: Service rm2 in namespace kubectl-6926 found.
STEP: exposing service
Feb  9 14:19:37.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6926'
Feb  9 14:19:37.272: INFO: stderr: ""
Feb  9 14:19:37.273: INFO: stdout: "service/rm3 exposed\n"
Feb  9 14:19:37.280: INFO: Service rm3 in namespace kubectl-6926 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:19:39.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6926" for this suite.
Feb  9 14:20:03.373: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:20:03.483: INFO: namespace kubectl-6926 deletion completed in 24.185554885s

• [SLOW TEST:38.388 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:20:03.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 14:20:29.664: INFO: Container started at 2020-02-09 14:20:11 +0000 UTC, pod became ready at 2020-02-09 14:20:28 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:20:29.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7698" for this suite.
Feb  9 14:20:51.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:20:51.876: INFO: namespace container-probe-7698 deletion completed in 22.204049408s

• [SLOW TEST:48.392 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:20:51.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-8c0d8174-26af-47f6-8faa-729ebce94dae
STEP: Creating a pod to test consume configMaps
Feb  9 14:20:52.013: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eb9b7ead-a115-49e2-996d-b46560819677" in namespace "projected-4080" to be "success or failure"
Feb  9 14:20:52.082: INFO: Pod "pod-projected-configmaps-eb9b7ead-a115-49e2-996d-b46560819677": Phase="Pending", Reason="", readiness=false. Elapsed: 68.631915ms
Feb  9 14:20:54.094: INFO: Pod "pod-projected-configmaps-eb9b7ead-a115-49e2-996d-b46560819677": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080903228s
Feb  9 14:20:56.102: INFO: Pod "pod-projected-configmaps-eb9b7ead-a115-49e2-996d-b46560819677": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089306411s
Feb  9 14:20:58.112: INFO: Pod "pod-projected-configmaps-eb9b7ead-a115-49e2-996d-b46560819677": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098973538s
Feb  9 14:21:00.119: INFO: Pod "pod-projected-configmaps-eb9b7ead-a115-49e2-996d-b46560819677": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105821161s
STEP: Saw pod success
Feb  9 14:21:00.119: INFO: Pod "pod-projected-configmaps-eb9b7ead-a115-49e2-996d-b46560819677" satisfied condition "success or failure"
Feb  9 14:21:00.122: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-eb9b7ead-a115-49e2-996d-b46560819677 container projected-configmap-volume-test: 
STEP: delete the pod
Feb  9 14:21:00.208: INFO: Waiting for pod pod-projected-configmaps-eb9b7ead-a115-49e2-996d-b46560819677 to disappear
Feb  9 14:21:00.216: INFO: Pod pod-projected-configmaps-eb9b7ead-a115-49e2-996d-b46560819677 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:21:00.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4080" for this suite.
Feb  9 14:21:06.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:21:06.445: INFO: namespace projected-4080 deletion completed in 6.217641403s

• [SLOW TEST:14.568 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:21:06.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  9 14:21:06.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4722'
Feb  9 14:21:06.770: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb  9 14:21:06.771: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Feb  9 14:21:10.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4722'
Feb  9 14:21:11.009: INFO: stderr: ""
Feb  9 14:21:11.009: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:21:11.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4722" for this suite.
Feb  9 14:21:33.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:21:33.164: INFO: namespace kubectl-4722 deletion completed in 22.148292938s

• [SLOW TEST:26.718 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
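The stderr in this test warns that `kubectl run --generator=deployment/apps.v1` is deprecated. A Deployment manifest roughly equivalent to what that generator produced would look like this (a sketch: the `run:` label convention follows what `kubectl run` applied; replica count and container name mirror the generator's defaults):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e2e-test-nginx-deployment
  labels:
    run: e2e-test-nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: e2e-test-nginx-deployment
  template:
    metadata:
      labels:
        run: e2e-test-nginx-deployment
    spec:
      containers:
      - name: e2e-test-nginx-deployment
        image: docker.io/library/nginx:1.14-alpine
```

The `deployment.extensions "e2e-test-nginx-deployment" deleted` output from the delete step appears to reflect kubectl 1.15 still resolving the bare `deployment` kind through the legacy extensions API group.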
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:21:33.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:22:29.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9557" for this suite.
Feb  9 14:22:35.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:22:35.828: INFO: namespace container-runtime-9557 deletion completed in 6.246491823s

• [SLOW TEST:62.664 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
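The rpa/rpof/rpn suffixes on the container names above appear to encode the restart policy under test (Always, OnFailure, Never); for each, the spec checks the observed RestartCount, Phase, Ready condition, and State against expectations. A sketch of one such pod (image and exit code are assumptions, not taken from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-rpof-example
spec:
  restartPolicy: OnFailure            # rpa = Always, rpof = OnFailure, rpn = Never (inferred)
  containers:
  - name: terminate-cmd-rpof
    image: busybox                    # assumed stand-in image
    command: ["sh", "-c", "exit 1"]   # non-zero exit drives the RestartCount/Phase assertions
```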
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:22:35.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Feb  9 14:22:35.945: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:22:36.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7355" for this suite.
Feb  9 14:22:42.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:22:42.247: INFO: namespace kubectl-7355 deletion completed in 6.159283618s

• [SLOW TEST:6.419 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
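`kubectl proxy -p 0` asks the operating system for a free ephemeral port instead of a fixed one; the test then curls `/api/` through whatever port was assigned. The underlying mechanism is ordinary port-0 binding, which can be observed without a cluster:

```python
import socket

# Binding to port 0 lets the kernel choose a free ephemeral port,
# which is the behavior `kubectl proxy -p 0` relies on.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]   # the port actually assigned by the kernel
print(port)
s.close()
```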
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:22:42.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-3901/secret-test-5d392308-1eed-4b18-90f5-cb5c9380dd94
STEP: Creating a pod to test consume secrets
Feb  9 14:22:42.359: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e4b401d-c581-4ef3-85b4-d22a804d536c" in namespace "secrets-3901" to be "success or failure"
Feb  9 14:22:42.367: INFO: Pod "pod-configmaps-5e4b401d-c581-4ef3-85b4-d22a804d536c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.922723ms
Feb  9 14:22:44.379: INFO: Pod "pod-configmaps-5e4b401d-c581-4ef3-85b4-d22a804d536c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020396359s
Feb  9 14:22:46.389: INFO: Pod "pod-configmaps-5e4b401d-c581-4ef3-85b4-d22a804d536c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029828857s
Feb  9 14:22:48.397: INFO: Pod "pod-configmaps-5e4b401d-c581-4ef3-85b4-d22a804d536c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03800575s
Feb  9 14:22:50.422: INFO: Pod "pod-configmaps-5e4b401d-c581-4ef3-85b4-d22a804d536c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063544968s
STEP: Saw pod success
Feb  9 14:22:50.423: INFO: Pod "pod-configmaps-5e4b401d-c581-4ef3-85b4-d22a804d536c" satisfied condition "success or failure"
Feb  9 14:22:50.429: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5e4b401d-c581-4ef3-85b4-d22a804d536c container env-test: 
STEP: delete the pod
Feb  9 14:22:50.584: INFO: Waiting for pod pod-configmaps-5e4b401d-c581-4ef3-85b4-d22a804d536c to disappear
Feb  9 14:22:50.591: INFO: Pod pod-configmaps-5e4b401d-c581-4ef3-85b4-d22a804d536c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:22:50.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3901" for this suite.
Feb  9 14:22:56.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:22:56.766: INFO: namespace secrets-3901 deletion completed in 6.168914321s

• [SLOW TEST:14.518 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
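The environment-consumption test above creates a Secret and injects one of its keys into a container's environment. An equivalent manifest, with illustrative names and a busybox stand-in for the suite's test image, might be:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-example
data:
  data-1: dmFsdWUtMQ==                # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-env-test-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test                    # matches the container name in the log
    image: busybox                    # assumed stand-in image
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA               # illustrative variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-example
          key: data-1
```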
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:22:56.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-f3a6d62e-5ccb-418f-a6b8-1cac7a512593
STEP: Creating a pod to test consume secrets
Feb  9 14:22:56.883: INFO: Waiting up to 5m0s for pod "pod-secrets-384020f8-1364-4e30-87fe-f4a7e9db2992" in namespace "secrets-2136" to be "success or failure"
Feb  9 14:22:56.889: INFO: Pod "pod-secrets-384020f8-1364-4e30-87fe-f4a7e9db2992": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216219ms
Feb  9 14:22:58.902: INFO: Pod "pod-secrets-384020f8-1364-4e30-87fe-f4a7e9db2992": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019199949s
Feb  9 14:23:00.916: INFO: Pod "pod-secrets-384020f8-1364-4e30-87fe-f4a7e9db2992": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032905886s
Feb  9 14:23:02.930: INFO: Pod "pod-secrets-384020f8-1364-4e30-87fe-f4a7e9db2992": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046918271s
Feb  9 14:23:04.942: INFO: Pod "pod-secrets-384020f8-1364-4e30-87fe-f4a7e9db2992": Phase="Pending", Reason="", readiness=false. Elapsed: 8.058478929s
Feb  9 14:23:06.956: INFO: Pod "pod-secrets-384020f8-1364-4e30-87fe-f4a7e9db2992": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073045063s
STEP: Saw pod success
Feb  9 14:23:06.956: INFO: Pod "pod-secrets-384020f8-1364-4e30-87fe-f4a7e9db2992" satisfied condition "success or failure"
Feb  9 14:23:06.961: INFO: Trying to get logs from node iruya-node pod pod-secrets-384020f8-1364-4e30-87fe-f4a7e9db2992 container secret-volume-test: 
STEP: delete the pod
Feb  9 14:23:07.194: INFO: Waiting for pod pod-secrets-384020f8-1364-4e30-87fe-f4a7e9db2992 to disappear
Feb  9 14:23:07.206: INFO: Pod pod-secrets-384020f8-1364-4e30-87fe-f4a7e9db2992 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:23:07.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2136" for this suite.
Feb  9 14:23:13.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:23:13.384: INFO: namespace secrets-2136 deletion completed in 6.170796588s

• [SLOW TEST:16.618 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
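In contrast to the environment variant, this test mounts the Secret as a volume and remaps the key to a new file path. A hedged sketch (names, per-item mode, and image are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map-example
data:
  data-1: dmFsdWUtMQ==                # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test          # matches the container name in the log
    image: busybox                    # assumed stand-in image
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map-example
      items:
      - key: data-1
        path: new-path-data-1         # the key-to-path mapping under test
        mode: 0400                    # illustrative per-item file mode
```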
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:23:13.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-08c87782-a7bd-4cca-ac0d-fe5fe40ce053
STEP: Creating configMap with name cm-test-opt-upd-ac6eb040-0256-4391-9013-bc6f3c062c34
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-08c87782-a7bd-4cca-ac0d-fe5fe40ce053
STEP: Updating configmap cm-test-opt-upd-ac6eb040-0256-4391-9013-bc6f3c062c34
STEP: Creating configMap with name cm-test-opt-create-4a22fe93-b29c-44f6-88ed-75e43069bef3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:23:28.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1991" for this suite.
Feb  9 14:23:50.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:23:50.546: INFO: namespace projected-1991 deletion completed in 22.466931537s

• [SLOW TEST:37.162 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
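The optional-update test above references ConfigMaps as *optional* volume sources, then deletes, updates, and creates ConfigMaps while waiting for the kubelet to re-sync the projected files. The key field is `optional: true`, which lets the pod start and keep running even while a referenced ConfigMap is absent; a speculative sketch of one such source:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-optional-example
spec:
  containers:
  - name: cm-volume-watcher            # illustrative name
    image: busybox                     # assumed stand-in image
    # Periodically read the projected file so updates become observable.
    command: ["sh", "-c", "while true; do cat /etc/cm-volume/data-1 2>/dev/null; sleep 2; done"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm-volume
  volumes:
  - name: cm-volume
    projected:
      sources:
      - configMap:
          name: cm-test-opt-create-example
          optional: true               # pod runs even before this ConfigMap exists;
                                       # the kubelet projects the file once it is created
```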
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:23:50.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  9 14:23:50.656: INFO: Waiting up to 5m0s for pod "pod-6cce40aa-2d73-4d63-872a-b6520214a219" in namespace "emptydir-959" to be "success or failure"
Feb  9 14:23:50.680: INFO: Pod "pod-6cce40aa-2d73-4d63-872a-b6520214a219": Phase="Pending", Reason="", readiness=false. Elapsed: 23.807428ms
Feb  9 14:23:52.699: INFO: Pod "pod-6cce40aa-2d73-4d63-872a-b6520214a219": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042463025s
Feb  9 14:23:54.707: INFO: Pod "pod-6cce40aa-2d73-4d63-872a-b6520214a219": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051216525s
Feb  9 14:23:56.716: INFO: Pod "pod-6cce40aa-2d73-4d63-872a-b6520214a219": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060142701s
Feb  9 14:23:58.727: INFO: Pod "pod-6cce40aa-2d73-4d63-872a-b6520214a219": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07096396s
Feb  9 14:24:00.736: INFO: Pod "pod-6cce40aa-2d73-4d63-872a-b6520214a219": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079337699s
STEP: Saw pod success
Feb  9 14:24:00.736: INFO: Pod "pod-6cce40aa-2d73-4d63-872a-b6520214a219" satisfied condition "success or failure"
Feb  9 14:24:00.739: INFO: Trying to get logs from node iruya-node pod pod-6cce40aa-2d73-4d63-872a-b6520214a219 container test-container: 
STEP: delete the pod
Feb  9 14:24:00.953: INFO: Waiting for pod pod-6cce40aa-2d73-4d63-872a-b6520214a219 to disappear
Feb  9 14:24:00.958: INFO: Pod pod-6cce40aa-2d73-4d63-872a-b6520214a219 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:24:00.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-959" for this suite.
Feb  9 14:24:06.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:24:07.090: INFO: namespace emptydir-959 deletion completed in 6.128383908s

• [SLOW TEST:16.543 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
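The (root,0777,default) tuple in the spec name appears to mean: write as root, expect 0777 permissions on the volume, and use the default emptyDir medium (node disk rather than tmpfs). A minimal pod exercising the same volume type (image and commands are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container               # matches the container name in the log
    image: busybox                     # assumed stand-in image
    command: ["sh", "-c", "ls -ld /test-volume && echo hi > /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # "default" medium; `medium: Memory` would use tmpfs
```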
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:24:07.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-7ee08056-5a68-421a-a96b-46a5906642bc
STEP: Creating a pod to test consume configMaps
Feb  9 14:24:07.259: INFO: Waiting up to 5m0s for pod "pod-configmaps-3d23f224-d7cc-43c6-9bac-66fab98cf04a" in namespace "configmap-1658" to be "success or failure"
Feb  9 14:24:07.264: INFO: Pod "pod-configmaps-3d23f224-d7cc-43c6-9bac-66fab98cf04a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.989859ms
Feb  9 14:24:09.274: INFO: Pod "pod-configmaps-3d23f224-d7cc-43c6-9bac-66fab98cf04a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014411877s
Feb  9 14:24:11.281: INFO: Pod "pod-configmaps-3d23f224-d7cc-43c6-9bac-66fab98cf04a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021015358s
Feb  9 14:24:13.290: INFO: Pod "pod-configmaps-3d23f224-d7cc-43c6-9bac-66fab98cf04a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030802764s
Feb  9 14:24:15.305: INFO: Pod "pod-configmaps-3d23f224-d7cc-43c6-9bac-66fab98cf04a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045206428s
Feb  9 14:24:17.315: INFO: Pod "pod-configmaps-3d23f224-d7cc-43c6-9bac-66fab98cf04a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.05513266s
STEP: Saw pod success
Feb  9 14:24:17.315: INFO: Pod "pod-configmaps-3d23f224-d7cc-43c6-9bac-66fab98cf04a" satisfied condition "success or failure"
Feb  9 14:24:17.319: INFO: Trying to get logs from node iruya-node pod pod-configmaps-3d23f224-d7cc-43c6-9bac-66fab98cf04a container configmap-volume-test: 
STEP: delete the pod
Feb  9 14:24:17.373: INFO: Waiting for pod pod-configmaps-3d23f224-d7cc-43c6-9bac-66fab98cf04a to disappear
Feb  9 14:24:17.436: INFO: Pod pod-configmaps-3d23f224-d7cc-43c6-9bac-66fab98cf04a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:24:17.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1658" for this suite.
Feb  9 14:24:23.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:24:23.650: INFO: namespace configmap-1658 deletion completed in 6.206459644s

• [SLOW TEST:16.560 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:24:23.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 14:24:23.726: INFO: Creating deployment "nginx-deployment"
Feb  9 14:24:23.733: INFO: Waiting for observed generation 1
Feb  9 14:24:26.011: INFO: Waiting for all required pods to come up
Feb  9 14:24:27.123: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Feb  9 14:24:53.896: INFO: Waiting for deployment "nginx-deployment" to complete
Feb  9 14:24:53.912: INFO: Updating deployment "nginx-deployment" with a non-existent image
Feb  9 14:24:53.934: INFO: Updating deployment nginx-deployment
Feb  9 14:24:53.934: INFO: Waiting for observed generation 2
Feb  9 14:24:56.478: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Feb  9 14:24:57.119: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Feb  9 14:24:57.150: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  9 14:24:57.164: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Feb  9 14:24:57.164: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Feb  9 14:24:57.167: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Feb  9 14:24:57.172: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Feb  9 14:24:57.172: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Feb  9 14:24:57.182: INFO: Updating deployment nginx-deployment
Feb  9 14:24:57.183: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Feb  9 14:24:57.397: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Feb  9 14:24:58.544: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  9 14:24:59.085: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-1134,SelfLink:/apis/apps/v1/namespaces/deployment-1134/deployments/nginx-deployment,UID:1f14f8f1-7a53-4ab2-96fd-30406c94edda,ResourceVersion:23706324,Generation:3,CreationTimestamp:2020-02-09 14:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-02-09 14:24:54 +0000 UTC 2020-02-09 14:24:23 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-02-09 14:24:57 +0000 UTC 2020-02-09 14:24:57 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Feb  9 14:24:59.801: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-1134,SelfLink:/apis/apps/v1/namespaces/deployment-1134/replicasets/nginx-deployment-55fb7cb77f,UID:446a0f5e-e585-467c-9680-0e191cb84e0b,ResourceVersion:23706349,Generation:3,CreationTimestamp:2020-02-09 14:24:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1f14f8f1-7a53-4ab2-96fd-30406c94edda 0xc001fdef37 0xc001fdef38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  9 14:24:59.801: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Feb  9 14:24:59.802: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-1134,SelfLink:/apis/apps/v1/namespaces/deployment-1134/replicasets/nginx-deployment-7b8c6f4498,UID:6cd70efe-b4af-45ce-92ff-2c108d36b54d,ResourceVersion:23706330,Generation:3,CreationTimestamp:2020-02-09 14:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 1f14f8f1-7a53-4ab2-96fd-30406c94edda 0xc001fdf027 0xc001fdf028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Feb  9 14:25:00.847: INFO: Pod "nginx-deployment-55fb7cb77f-6l6wt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6l6wt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-6l6wt,UID:cc885c17-7891-4fe5-813b-9aeecd097c42,ResourceVersion:23706347,Generation:0,CreationTimestamp:2020-02-09 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc001fdfa07 0xc001fdfa08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001fdfa90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fdfab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.847: INFO: Pod "nginx-deployment-55fb7cb77f-6n4tn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6n4tn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-6n4tn,UID:56a7719b-c5e5-4b54-8572-beb1d2bfd4ea,ResourceVersion:23706340,Generation:0,CreationTimestamp:2020-02-09 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc001fdfb37 0xc001fdfb38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001fdfbc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fdfbe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.848: INFO: Pod "nginx-deployment-55fb7cb77f-9m7vq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9m7vq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-9m7vq,UID:abacdc6b-af58-4086-b8f4-6bdec6ca2bb6,ResourceVersion:23706285,Generation:0,CreationTimestamp:2020-02-09 14:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc001fdfc67 0xc001fdfc68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001fdfd00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001fdfd20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-09 14:24:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.848: INFO: Pod "nginx-deployment-55fb7cb77f-bv9wk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bv9wk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-bv9wk,UID:90002eaf-72dd-4333-ac3a-f47df0f00c7e,ResourceVersion:23706284,Generation:0,CreationTimestamp:2020-02-09 14:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc0027d8047 0xc0027d8048}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d80b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d80d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-09 14:24:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.848: INFO: Pod "nginx-deployment-55fb7cb77f-cgzbd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cgzbd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-cgzbd,UID:ef0d6953-e351-4889-927c-2254f04f15e0,ResourceVersion:23706343,Generation:0,CreationTimestamp:2020-02-09 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc0027d81a7 0xc0027d81a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d8210} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d8230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.848: INFO: Pod "nginx-deployment-55fb7cb77f-h6hmc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-h6hmc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-h6hmc,UID:ff9948c7-42ff-432b-89d0-91c42e3d236f,ResourceVersion:23706332,Generation:0,CreationTimestamp:2020-02-09 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc0027d82b7 0xc0027d82b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0027d8330} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d8350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.849: INFO: Pod "nginx-deployment-55fb7cb77f-l5r5z" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-l5r5z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-l5r5z,UID:1daa64ac-aa6d-4f2d-9e6a-12d8a1115e74,ResourceVersion:23706352,Generation:0,CreationTimestamp:2020-02-09 14:24:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc0027d83d7 0xc0027d83d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d8440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d8460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-09 14:24:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.849: INFO: Pod "nginx-deployment-55fb7cb77f-nqdkj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nqdkj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-nqdkj,UID:c1012541-b1e4-4c2e-a3ca-2a802be10724,ResourceVersion:23706270,Generation:0,CreationTimestamp:2020-02-09 14:24:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc0027d8537 0xc0027d8538}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0027d85b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d85d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-09 14:24:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.849: INFO: Pod "nginx-deployment-55fb7cb77f-rl7fd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rl7fd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-rl7fd,UID:63ac16c0-7ed1-4531-bccd-9a92cf6eef8d,ResourceVersion:23706256,Generation:0,CreationTimestamp:2020-02-09 14:24:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc0027d86a7 0xc0027d86a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0027d8720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d8740}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-09 14:24:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.849: INFO: Pod "nginx-deployment-55fb7cb77f-s6fw7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-s6fw7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-s6fw7,UID:e4ef697c-3426-4451-80b8-e41e11b99895,ResourceVersion:23706331,Generation:0,CreationTimestamp:2020-02-09 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc0027d8817 0xc0027d8818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d8880} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d88a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.849: INFO: Pod "nginx-deployment-55fb7cb77f-v2hvw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v2hvw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-v2hvw,UID:419358e7-4f18-4d19-af3f-9d32ce89cff9,ResourceVersion:23706342,Generation:0,CreationTimestamp:2020-02-09 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc0027d8927 0xc0027d8928}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d8990} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d89d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.850: INFO: Pod "nginx-deployment-55fb7cb77f-vdg6d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vdg6d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-vdg6d,UID:91e982cf-cf97-45da-9680-e2e1e75f1625,ResourceVersion:23706261,Generation:0,CreationTimestamp:2020-02-09 14:24:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc0027d8a57 0xc0027d8a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d8ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d8af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:54 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-09 14:24:54 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.850: INFO: Pod "nginx-deployment-55fb7cb77f-zhq7l" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zhq7l,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-55fb7cb77f-zhq7l,UID:8e9f4f13-0737-4e20-a3be-624f10fd0f6c,ResourceVersion:23706344,Generation:0,CreationTimestamp:2020-02-09 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 446a0f5e-e585-467c-9680-0e191cb84e0b 0xc0027d8be7 0xc0027d8be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d8c60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d8c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.850: INFO: Pod "nginx-deployment-7b8c6f4498-2s7zs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2s7zs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-2s7zs,UID:8ee3e228-9e7a-42e6-8f4e-26894c32f61e,ResourceVersion:23706351,Generation:0,CreationTimestamp:2020-02-09 14:24:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d8d07 0xc0027d8d08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d8d80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d8da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:57 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-02-09 14:24:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.850: INFO: Pod "nginx-deployment-7b8c6f4498-678zt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-678zt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-678zt,UID:9e45f681-24ec-41ff-8d74-f5cc62605ef6,ResourceVersion:23706221,Generation:0,CreationTimestamp:2020-02-09 14:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d8e67 0xc0027d8e68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d8ee0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d8f00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-09 14:24:23 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 14:24:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://fd38df04c54e8e33d7896dc21f82c8687aca4842f07eb7321e2bdbd5f71b5c4e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.851: INFO: Pod "nginx-deployment-7b8c6f4498-79t5g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-79t5g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-79t5g,UID:319f1e29-cbf3-4ca6-819e-300f32efb49b,ResourceVersion:23706201,Generation:0,CreationTimestamp:2020-02-09 14:24:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d8ff7 0xc0027d8ff8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d9060} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d9080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:50 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:50 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-09 14:24:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 14:24:46 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6516cecef57c2442291d8774077761a443fdd36812818c3e068ce77d0c8455ca}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.851: INFO: Pod "nginx-deployment-7b8c6f4498-b8blt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b8blt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-b8blt,UID:5a9c78bd-43aa-46ba-b01e-9f26812ca902,ResourceVersion:23706304,Generation:0,CreationTimestamp:2020-02-09 14:24:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d9167 0xc0027d9168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d91f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d9210}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.851: INFO: Pod "nginx-deployment-7b8c6f4498-bp8lr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bp8lr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-bp8lr,UID:269b477e-b1b4-41a1-b32c-9581c0cc614a,ResourceVersion:23706335,Generation:0,CreationTimestamp:2020-02-09 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d9297 0xc0027d9298}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d9310} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d9340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.851: INFO: Pod "nginx-deployment-7b8c6f4498-f8whw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f8whw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-f8whw,UID:ed29efbb-6a70-4eba-889f-1644e51e506a,ResourceVersion:23706302,Generation:0,CreationTimestamp:2020-02-09 14:24:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d93c7 0xc0027d93c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d9470} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d9490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:57 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.851: INFO: Pod "nginx-deployment-7b8c6f4498-hp8h5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hp8h5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-hp8h5,UID:fc4c9039-611c-4a2a-b71e-8a1f21d2a1df,ResourceVersion:23706359,Generation:0,CreationTimestamp:2020-02-09 14:24:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d9527 0xc0027d9528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d9590} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d95c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-02-09 14:24:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.851: INFO: Pod "nginx-deployment-7b8c6f4498-jxfz5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jxfz5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-jxfz5,UID:54d4afb7-84ee-40c9-8fd2-8b360738a7f1,ResourceVersion:23706337,Generation:0,CreationTimestamp:2020-02-09 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d9687 0xc0027d9688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d96f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d9710}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.852: INFO: Pod "nginx-deployment-7b8c6f4498-kgrxj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kgrxj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-kgrxj,UID:8c6c1d7a-9c0b-4ca6-92c7-7b7d08dd653b,ResourceVersion:23706191,Generation:0,CreationTimestamp:2020-02-09 14:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d9797 0xc0027d9798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d9800} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d9820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-02-09 14:24:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 14:24:48 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://0645dfceec88bcd70633d4c68945a74d093f84c8048ad3ab8da23c2a3231a8bf}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.852: INFO: Pod "nginx-deployment-7b8c6f4498-kjqc9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kjqc9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-kjqc9,UID:fffeb863-197d-4ce4-b849-d7119f728fc5,ResourceVersion:23706195,Generation:0,CreationTimestamp:2020-02-09 14:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d98f7 0xc0027d98f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d9960} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d9980}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-02-09 14:24:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 14:24:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d3e6db6d189fe7eb3941e7f42d543392913988063d2ddce6dd06b162018fc2a4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.852: INFO: Pod "nginx-deployment-7b8c6f4498-ljj9m" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ljj9m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-ljj9m,UID:4e554251-a991-41a8-959e-7037b7fac2ca,ResourceVersion:23706227,Generation:0,CreationTimestamp:2020-02-09 14:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d9a57 0xc0027d9a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d9ad0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d9af0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-02-09 14:24:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 14:24:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://96b5a498427ce589aa8fcafc20d0b8008a07f3d8b26de93b563d9bde7c5982d0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.852: INFO: Pod "nginx-deployment-7b8c6f4498-nlkmr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nlkmr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-nlkmr,UID:bc3b3ec1-1655-4f8b-960c-a10269e67533,ResourceVersion:23706319,Generation:0,CreationTimestamp:2020-02-09 14:24:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d9c17 0xc0027d9c18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d9c90} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d9cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.853: INFO: Pod "nginx-deployment-7b8c6f4498-nlq6h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nlq6h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-nlq6h,UID:a6148693-5d7a-413f-a589-f76bf5fe3fd6,ResourceVersion:23706314,Generation:0,CreationTimestamp:2020-02-09 14:24:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d9d57 0xc0027d9d58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d9dd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d9df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.853: INFO: Pod "nginx-deployment-7b8c6f4498-r7sz9" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r7sz9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-r7sz9,UID:6d7bdd9b-adca-468a-ac0b-fa2b04965ab0,ResourceVersion:23706209,Generation:0,CreationTimestamp:2020-02-09 14:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d9e77 0xc0027d9e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027d9ef0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0027d9f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.8,StartTime:2020-02-09 14:24:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 14:24:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://58e822f0407d2fe713c3b7de7609af902591e049c20b27e35b75ce6a83be70c5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.853: INFO: Pod "nginx-deployment-7b8c6f4498-s57p8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s57p8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-s57p8,UID:4dd3ce30-2d9d-4860-87f8-a84cfeb36f79,ResourceVersion:23706318,Generation:0,CreationTimestamp:2020-02-09 14:24:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc0027d9fe7 0xc0027d9fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fe4070} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fe4090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.853: INFO: Pod "nginx-deployment-7b8c6f4498-tmxqw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tmxqw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-tmxqw,UID:8903f584-d04e-4b44-b4ea-3450ac867270,ResourceVersion:23706218,Generation:0,CreationTimestamp:2020-02-09 14:24:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc002fe4117 0xc002fe4118}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fe4190} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fe41b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:52 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:52 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-02-09 14:24:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 14:24:51 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e177a80b20d34bd00d847c74f2b318fac69c747e79267d5d26a5e34c76f51ed8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.854: INFO: Pod "nginx-deployment-7b8c6f4498-vd45c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vd45c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-vd45c,UID:f134198c-df6d-47ef-87d1-852676ba4e97,ResourceVersion:23706334,Generation:0,CreationTimestamp:2020-02-09 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc002fe4287 0xc002fe4288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fe42f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fe4310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.854: INFO: Pod "nginx-deployment-7b8c6f4498-w7w5j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w7w5j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-w7w5j,UID:1c24c8bf-aa67-4432-a6fe-bd95703ed466,ResourceVersion:23706336,Generation:0,CreationTimestamp:2020-02-09 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc002fe4397 0xc002fe4398}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fe4410} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fe4430}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.854: INFO: Pod "nginx-deployment-7b8c6f4498-x4mrw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x4mrw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-x4mrw,UID:0674a439-34c4-4f3e-9345-cafb4bf60ff1,ResourceVersion:23706333,Generation:0,CreationTimestamp:2020-02-09 14:24:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc002fe44b7 0xc002fe44b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fe4530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fe4550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:58 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 14:25:00.854: INFO: Pod "nginx-deployment-7b8c6f4498-zh26x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zh26x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1134,SelfLink:/api/v1/namespaces/deployment-1134/pods/nginx-deployment-7b8c6f4498-zh26x,UID:468b177f-1a02-478f-b4d1-ffe8ce20d8a4,ResourceVersion:23706199,Generation:0,CreationTimestamp:2020-02-09 14:24:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 6cd70efe-b4af-45ce-92ff-2c108d36b54d 0xc002fe45d7 0xc002fe45d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-ms7mr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-ms7mr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-ms7mr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002fe4640} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002fe4660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:49 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:49 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:24:24 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-02-09 14:24:24 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 14:24:49 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://e5b898beb6c673a85818cecc72b3b71ac6064ab0c3eb0542de2fa96e38e9568a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:25:00.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1134" for this suite.
Feb  9 14:25:40.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:25:44.293: INFO: namespace deployment-1134 deletion completed in 42.104120431s

• [SLOW TEST:80.642 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:25:44.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 14:25:49.191: INFO: Waiting up to 5m0s for pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347" in namespace "downward-api-1892" to be "success or failure"
Feb  9 14:25:51.672: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 2.481112273s
Feb  9 14:25:54.938: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 5.747489362s
Feb  9 14:25:56.954: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 7.76309326s
Feb  9 14:25:58.967: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 9.776183299s
Feb  9 14:26:01.109: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 11.918563125s
Feb  9 14:26:03.223: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 14.032517642s
Feb  9 14:26:05.523: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 16.331664492s
Feb  9 14:26:07.606: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 18.414905574s
Feb  9 14:26:09.673: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 20.481589088s
Feb  9 14:26:11.680: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 22.489052187s
Feb  9 14:26:13.694: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 24.502678128s
Feb  9 14:26:15.927: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 26.736022837s
Feb  9 14:26:17.935: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 28.743587608s
Feb  9 14:26:19.943: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Pending", Reason="", readiness=false. Elapsed: 30.751907933s
Feb  9 14:26:21.951: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.760353017s
STEP: Saw pod success
Feb  9 14:26:21.951: INFO: Pod "downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347" satisfied condition "success or failure"
Feb  9 14:26:21.957: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347 container client-container: 
STEP: delete the pod
Feb  9 14:26:22.106: INFO: Waiting for pod downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347 to disappear
Feb  9 14:26:22.128: INFO: Pod downwardapi-volume-beeecc44-9676-446f-9da4-521e1a9fe347 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:26:22.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1892" for this suite.
Feb  9 14:26:28.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:26:28.350: INFO: namespace downward-api-1892 deletion completed in 6.208470247s

• [SLOW TEST:44.057 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:26:28.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Feb  9 14:26:28.456: INFO: Waiting up to 5m0s for pod "client-containers-6e720337-ea30-476c-97e7-d2a973c6cb4f" in namespace "containers-4478" to be "success or failure"
Feb  9 14:26:28.473: INFO: Pod "client-containers-6e720337-ea30-476c-97e7-d2a973c6cb4f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.238617ms
Feb  9 14:26:30.484: INFO: Pod "client-containers-6e720337-ea30-476c-97e7-d2a973c6cb4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027435244s
Feb  9 14:26:32.492: INFO: Pod "client-containers-6e720337-ea30-476c-97e7-d2a973c6cb4f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036099506s
Feb  9 14:26:34.509: INFO: Pod "client-containers-6e720337-ea30-476c-97e7-d2a973c6cb4f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052778329s
Feb  9 14:26:36.527: INFO: Pod "client-containers-6e720337-ea30-476c-97e7-d2a973c6cb4f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071073774s
Feb  9 14:26:38.540: INFO: Pod "client-containers-6e720337-ea30-476c-97e7-d2a973c6cb4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.083460825s
STEP: Saw pod success
Feb  9 14:26:38.540: INFO: Pod "client-containers-6e720337-ea30-476c-97e7-d2a973c6cb4f" satisfied condition "success or failure"
Feb  9 14:26:38.545: INFO: Trying to get logs from node iruya-node pod client-containers-6e720337-ea30-476c-97e7-d2a973c6cb4f container test-container: 
STEP: delete the pod
Feb  9 14:26:39.450: INFO: Waiting for pod client-containers-6e720337-ea30-476c-97e7-d2a973c6cb4f to disappear
Feb  9 14:26:39.506: INFO: Pod client-containers-6e720337-ea30-476c-97e7-d2a973c6cb4f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:26:39.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4478" for this suite.
Feb  9 14:26:45.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:26:45.772: INFO: namespace containers-4478 deletion completed in 6.253489218s

• [SLOW TEST:17.422 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:26:45.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 14:26:45.860: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e384bc4d-a52d-44c1-a813-ab0e6a13e830" in namespace "projected-1133" to be "success or failure"
Feb  9 14:26:45.869: INFO: Pod "downwardapi-volume-e384bc4d-a52d-44c1-a813-ab0e6a13e830": Phase="Pending", Reason="", readiness=false. Elapsed: 8.96649ms
Feb  9 14:26:47.892: INFO: Pod "downwardapi-volume-e384bc4d-a52d-44c1-a813-ab0e6a13e830": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031847495s
Feb  9 14:26:49.905: INFO: Pod "downwardapi-volume-e384bc4d-a52d-44c1-a813-ab0e6a13e830": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045158635s
Feb  9 14:26:51.916: INFO: Pod "downwardapi-volume-e384bc4d-a52d-44c1-a813-ab0e6a13e830": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056200285s
Feb  9 14:26:53.932: INFO: Pod "downwardapi-volume-e384bc4d-a52d-44c1-a813-ab0e6a13e830": Phase="Pending", Reason="", readiness=false. Elapsed: 8.071991132s
Feb  9 14:26:55.942: INFO: Pod "downwardapi-volume-e384bc4d-a52d-44c1-a813-ab0e6a13e830": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082230039s
STEP: Saw pod success
Feb  9 14:26:55.943: INFO: Pod "downwardapi-volume-e384bc4d-a52d-44c1-a813-ab0e6a13e830" satisfied condition "success or failure"
Feb  9 14:26:55.948: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e384bc4d-a52d-44c1-a813-ab0e6a13e830 container client-container: 
STEP: delete the pod
Feb  9 14:26:56.092: INFO: Waiting for pod downwardapi-volume-e384bc4d-a52d-44c1-a813-ab0e6a13e830 to disappear
Feb  9 14:26:56.104: INFO: Pod downwardapi-volume-e384bc4d-a52d-44c1-a813-ab0e6a13e830 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:26:56.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1133" for this suite.
Feb  9 14:27:02.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:27:02.260: INFO: namespace projected-1133 deletion completed in 6.149768617s

• [SLOW TEST:16.487 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:27:02.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0209 14:27:04.493005       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  9 14:27:04.493: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:27:04.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-639" for this suite.
Feb  9 14:27:11.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:27:11.288: INFO: namespace gc-639 deletion completed in 6.79042011s

• [SLOW TEST:9.028 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:27:11.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb  9 14:27:11.352: INFO: Waiting up to 5m0s for pod "pod-74d574ce-e7fc-4de4-ad3a-b3f95cc2af91" in namespace "emptydir-4759" to be "success or failure"
Feb  9 14:27:11.367: INFO: Pod "pod-74d574ce-e7fc-4de4-ad3a-b3f95cc2af91": Phase="Pending", Reason="", readiness=false. Elapsed: 14.278237ms
Feb  9 14:27:13.376: INFO: Pod "pod-74d574ce-e7fc-4de4-ad3a-b3f95cc2af91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023525775s
Feb  9 14:27:15.387: INFO: Pod "pod-74d574ce-e7fc-4de4-ad3a-b3f95cc2af91": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034980983s
Feb  9 14:27:17.401: INFO: Pod "pod-74d574ce-e7fc-4de4-ad3a-b3f95cc2af91": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04873166s
Feb  9 14:27:19.412: INFO: Pod "pod-74d574ce-e7fc-4de4-ad3a-b3f95cc2af91": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059768681s
Feb  9 14:27:21.422: INFO: Pod "pod-74d574ce-e7fc-4de4-ad3a-b3f95cc2af91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069671092s
STEP: Saw pod success
Feb  9 14:27:21.422: INFO: Pod "pod-74d574ce-e7fc-4de4-ad3a-b3f95cc2af91" satisfied condition "success or failure"
Feb  9 14:27:21.426: INFO: Trying to get logs from node iruya-node pod pod-74d574ce-e7fc-4de4-ad3a-b3f95cc2af91 container test-container: 
STEP: delete the pod
Feb  9 14:27:21.595: INFO: Waiting for pod pod-74d574ce-e7fc-4de4-ad3a-b3f95cc2af91 to disappear
Feb  9 14:27:21.600: INFO: Pod pod-74d574ce-e7fc-4de4-ad3a-b3f95cc2af91 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:27:21.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4759" for this suite.
Feb  9 14:27:27.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:27:27.800: INFO: namespace emptydir-4759 deletion completed in 6.194223701s

• [SLOW TEST:16.511 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:27:27.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb  9 14:27:37.955: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-83087836-e080-4538-9925-3879cd3f3030,GenerateName:,Namespace:events-127,SelfLink:/api/v1/namespaces/events-127/pods/send-events-83087836-e080-4538-9925-3879cd3f3030,UID:b428302c-3620-4d53-858f-d5842607f6f3,ResourceVersion:23706908,Generation:0,CreationTimestamp:2020-02-09 14:27:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 913234124,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pdcpl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pdcpl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pdcpl true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001f09ea0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc001f09ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:27:28 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:27:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:27:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:27:27 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-09 14:27:28 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-09 14:27:35 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://71f23d2603a0a832fcd977e1eef46e7364897d1e3c4cb3edd3f9694826697c28}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Feb  9 14:27:39.969: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb  9 14:27:41.977: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:27:41.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-127" for this suite.
Feb  9 14:28:20.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:28:20.229: INFO: namespace events-127 deletion completed in 38.177590985s

• [SLOW TEST:52.429 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:28:20.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-63dc4d0e-a44a-43d6-8abb-bbb9243a5426
STEP: Creating a pod to test consume configMaps
Feb  9 14:28:20.378: INFO: Waiting up to 5m0s for pod "pod-configmaps-0cd1277d-6b73-4ea1-a89c-862eb47fcfa1" in namespace "configmap-3573" to be "success or failure"
Feb  9 14:28:20.388: INFO: Pod "pod-configmaps-0cd1277d-6b73-4ea1-a89c-862eb47fcfa1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.499531ms
Feb  9 14:28:22.400: INFO: Pod "pod-configmaps-0cd1277d-6b73-4ea1-a89c-862eb47fcfa1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020885865s
Feb  9 14:28:24.406: INFO: Pod "pod-configmaps-0cd1277d-6b73-4ea1-a89c-862eb47fcfa1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026800919s
Feb  9 14:28:26.413: INFO: Pod "pod-configmaps-0cd1277d-6b73-4ea1-a89c-862eb47fcfa1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034398465s
Feb  9 14:28:28.425: INFO: Pod "pod-configmaps-0cd1277d-6b73-4ea1-a89c-862eb47fcfa1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.046547163s
Feb  9 14:28:30.435: INFO: Pod "pod-configmaps-0cd1277d-6b73-4ea1-a89c-862eb47fcfa1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056560341s
STEP: Saw pod success
Feb  9 14:28:30.436: INFO: Pod "pod-configmaps-0cd1277d-6b73-4ea1-a89c-862eb47fcfa1" satisfied condition "success or failure"
Feb  9 14:28:30.440: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0cd1277d-6b73-4ea1-a89c-862eb47fcfa1 container configmap-volume-test: 
STEP: delete the pod
Feb  9 14:28:30.526: INFO: Waiting for pod pod-configmaps-0cd1277d-6b73-4ea1-a89c-862eb47fcfa1 to disappear
Feb  9 14:28:30.544: INFO: Pod pod-configmaps-0cd1277d-6b73-4ea1-a89c-862eb47fcfa1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:28:30.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3573" for this suite.
Feb  9 14:28:36.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:28:36.719: INFO: namespace configmap-3573 deletion completed in 6.168501695s

• [SLOW TEST:16.489 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:28:36.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 14:28:36.806: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb  9 14:28:41.817: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  9 14:28:45.872: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb  9 14:28:47.880: INFO: Creating deployment "test-rollover-deployment"
Feb  9 14:28:47.934: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb  9 14:28:49.952: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb  9 14:28:49.959: INFO: Ensure that both replica sets have 1 created replica
Feb  9 14:28:49.964: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb  9 14:28:49.970: INFO: Updating deployment test-rollover-deployment
Feb  9 14:28:49.970: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb  9 14:28:52.922: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb  9 14:28:52.931: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb  9 14:28:52.936: INFO: all replica sets need to contain the pod-template-hash label
Feb  9 14:28:52.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855331, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:28:54.953: INFO: all replica sets need to contain the pod-template-hash label
Feb  9 14:28:54.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855331, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:28:56.985: INFO: all replica sets need to contain the pod-template-hash label
Feb  9 14:28:56.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855331, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:28:58.951: INFO: all replica sets need to contain the pod-template-hash label
Feb  9 14:28:58.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855338, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:29:00.986: INFO: all replica sets need to contain the pod-template-hash label
Feb  9 14:29:00.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855338, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:29:02.954: INFO: all replica sets need to contain the pod-template-hash label
Feb  9 14:29:02.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855338, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:29:04.980: INFO: all replica sets need to contain the pod-template-hash label
Feb  9 14:29:04.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855338, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:29:06.946: INFO: all replica sets need to contain the pod-template-hash label
Feb  9 14:29:06.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855338, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63716855327, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb  9 14:29:08.957: INFO: 
Feb  9 14:29:08.957: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  9 14:29:08.972: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6766,SelfLink:/apis/apps/v1/namespaces/deployment-6766/deployments/test-rollover-deployment,UID:4b133293-4c7d-4315-9f6c-bb93b083b3b7,ResourceVersion:23707140,Generation:2,CreationTimestamp:2020-02-09 14:28:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-09 14:28:47 +0000 UTC 2020-02-09 14:28:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-09 14:29:08 +0000 UTC 2020-02-09 14:28:47 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Feb  9 14:29:08.978: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6766,SelfLink:/apis/apps/v1/namespaces/deployment-6766/replicasets/test-rollover-deployment-854595fc44,UID:40566c55-763e-478e-81e4-7e1dc6c98d8b,ResourceVersion:23707129,Generation:2,CreationTimestamp:2020-02-09 14:28:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 4b133293-4c7d-4315-9f6c-bb93b083b3b7 0xc0010ab8b7 0xc0010ab8b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  9 14:29:08.978: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb  9 14:29:08.978: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6766,SelfLink:/apis/apps/v1/namespaces/deployment-6766/replicasets/test-rollover-controller,UID:88948df4-6141-423e-b7b4-1b0d98d0dd2d,ResourceVersion:23707138,Generation:2,CreationTimestamp:2020-02-09 14:28:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 4b133293-4c7d-4315-9f6c-bb93b083b3b7 0xc0010ab7e7 0xc0010ab7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  9 14:29:08.978: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6766,SelfLink:/apis/apps/v1/namespaces/deployment-6766/replicasets/test-rollover-deployment-9b8b997cf,UID:b425482e-11cb-4ae4-9bf4-0ccdf1362e24,ResourceVersion:23707098,Generation:2,CreationTimestamp:2020-02-09 14:28:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 4b133293-4c7d-4315-9f6c-bb93b083b3b7 0xc0010ab980 0xc0010ab981}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  9 14:29:08.991: INFO: Pod "test-rollover-deployment-854595fc44-5vzzm" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-5vzzm,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6766,SelfLink:/api/v1/namespaces/deployment-6766/pods/test-rollover-deployment-854595fc44-5vzzm,UID:3be25172-b359-4cdd-9819-ed72d617092f,ResourceVersion:23707112,Generation:0,CreationTimestamp:2020-02-09 14:28:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 40566c55-763e-478e-81e4-7e1dc6c98d8b 0xc002954967 0xc002954968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mhbf8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mhbf8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-mhbf8 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029549d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029549f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:28:51 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:28:58 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:28:58 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:28:51 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-09 14:28:51 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-09 14:28:58 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://3281fac08ed0b32f9ffd67d0f08fa957e565ef7583bc01258a6b159c10df5352}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:29:08.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6766" for this suite.
Feb  9 14:29:15.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:29:16.033: INFO: namespace deployment-6766 deletion completed in 7.029297352s

• [SLOW TEST:39.313 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
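The rollover test above flips a Deployment from an old ReplicaSet to a new one while keeping minimum availability. A sketch of the Deployment spec implied by the object dump above (names, image, and field values taken from the `Spec` dump; this is a reconstruction, not the literal test fixture):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  labels:
    name: rollover-pod
spec:
  replicas: 1
  minReadySeconds: 10        # why the poll loop above keeps waiting: a new pod
                             # must be Ready for 10s before it counts as available
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # the old pod stays until the new one is available
      maxSurge: 1            # at most one extra pod exists during the rollover
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```

With `maxUnavailable: 0` and `maxSurge: 1`, the status dumps above showing `Replicas:2, UpdatedReplicas:1` during the transition are exactly what a rollover should look like: one old and one new pod until the new ReplicaSet satisfies `minReadySeconds`.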
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:29:16.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb  9 14:29:16.090: INFO: Waiting up to 5m0s for pod "downward-api-207d8a9f-5603-4d60-9247-d362b2023a5c" in namespace "downward-api-3379" to be "success or failure"
Feb  9 14:29:16.115: INFO: Pod "downward-api-207d8a9f-5603-4d60-9247-d362b2023a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 24.385527ms
Feb  9 14:29:18.372: INFO: Pod "downward-api-207d8a9f-5603-4d60-9247-d362b2023a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281244955s
Feb  9 14:29:20.380: INFO: Pod "downward-api-207d8a9f-5603-4d60-9247-d362b2023a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.28903368s
Feb  9 14:29:22.389: INFO: Pod "downward-api-207d8a9f-5603-4d60-9247-d362b2023a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.298195126s
Feb  9 14:29:24.431: INFO: Pod "downward-api-207d8a9f-5603-4d60-9247-d362b2023a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.340231828s
Feb  9 14:29:26.442: INFO: Pod "downward-api-207d8a9f-5603-4d60-9247-d362b2023a5c": Phase="Running", Reason="", readiness=true. Elapsed: 10.351198424s
Feb  9 14:29:28.457: INFO: Pod "downward-api-207d8a9f-5603-4d60-9247-d362b2023a5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.366704242s
STEP: Saw pod success
Feb  9 14:29:28.457: INFO: Pod "downward-api-207d8a9f-5603-4d60-9247-d362b2023a5c" satisfied condition "success or failure"
Feb  9 14:29:28.464: INFO: Trying to get logs from node iruya-node pod downward-api-207d8a9f-5603-4d60-9247-d362b2023a5c container dapi-container: 
STEP: delete the pod
Feb  9 14:29:28.567: INFO: Waiting for pod downward-api-207d8a9f-5603-4d60-9247-d362b2023a5c to disappear
Feb  9 14:29:28.646: INFO: Pod downward-api-207d8a9f-5603-4d60-9247-d362b2023a5c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:29:28.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3379" for this suite.
Feb  9 14:29:34.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:29:34.899: INFO: namespace downward-api-3379 deletion completed in 6.244148558s

• [SLOW TEST:18.865 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
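The Downward API test above verifies that a container's resource limits and requests can be exposed as environment variables via `resourceFieldRef`. A minimal sketch of such a pod (the container name `dapi-container` appears in the log; the image, command, and resource values here are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-pod
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox            # assumed image; prints its env and exits
    command: ["sh", "-c", "env"]
    resources:
      requests: {cpu: 250m, memory: 32Mi}   # assumed values
      limits:   {cpu: 500m, memory: 64Mi}   # assumed values
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```

The pod runs to `Succeeded` (as in the log's "success or failure" wait), after which the test reads the container logs to check the env var values.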
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:29:34.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Feb  9 14:29:34.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4317'
Feb  9 14:29:36.790: INFO: stderr: ""
Feb  9 14:29:36.790: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Feb  9 14:29:36.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4317'
Feb  9 14:29:41.767: INFO: stderr: ""
Feb  9 14:29:41.767: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:29:41.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4317" for this suite.
Feb  9 14:29:47.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:29:47.955: INFO: namespace kubectl-4317 deletion completed in 6.174062859s

• [SLOW TEST:13.055 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
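The `kubectl run ... --restart=Never --generator=run-pod/v1` invocation above creates a bare Pod rather than a Deployment or Job. Roughly the manifest that generator produces (a sketch; the actual generated object may carry additional defaulted fields):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  labels:
    run: e2e-test-nginx-pod   # label added by the run generator
spec:
  restartPolicy: Never        # selected by --restart=Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine
```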
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:29:47.957: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-d3af81e7-bce3-4972-9f33-0cb7cdbc0f0b
STEP: Creating a pod to test consume secrets
Feb  9 14:29:48.171: INFO: Waiting up to 5m0s for pod "pod-secrets-155189ca-b1cc-4443-ba10-271d4ddb7c6c" in namespace "secrets-3659" to be "success or failure"
Feb  9 14:29:48.189: INFO: Pod "pod-secrets-155189ca-b1cc-4443-ba10-271d4ddb7c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.833241ms
Feb  9 14:29:50.197: INFO: Pod "pod-secrets-155189ca-b1cc-4443-ba10-271d4ddb7c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026597276s
Feb  9 14:29:52.203: INFO: Pod "pod-secrets-155189ca-b1cc-4443-ba10-271d4ddb7c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0328109s
Feb  9 14:29:54.217: INFO: Pod "pod-secrets-155189ca-b1cc-4443-ba10-271d4ddb7c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045970966s
Feb  9 14:29:56.241: INFO: Pod "pod-secrets-155189ca-b1cc-4443-ba10-271d4ddb7c6c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070356562s
Feb  9 14:29:58.251: INFO: Pod "pod-secrets-155189ca-b1cc-4443-ba10-271d4ddb7c6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080155018s
STEP: Saw pod success
Feb  9 14:29:58.251: INFO: Pod "pod-secrets-155189ca-b1cc-4443-ba10-271d4ddb7c6c" satisfied condition "success or failure"
Feb  9 14:29:58.254: INFO: Trying to get logs from node iruya-node pod pod-secrets-155189ca-b1cc-4443-ba10-271d4ddb7c6c container secret-volume-test: 
STEP: delete the pod
Feb  9 14:29:58.306: INFO: Waiting for pod pod-secrets-155189ca-b1cc-4443-ba10-271d4ddb7c6c to disappear
Feb  9 14:29:58.320: INFO: Pod pod-secrets-155189ca-b1cc-4443-ba10-271d4ddb7c6c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:29:58.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3659" for this suite.
Feb  9 14:30:04.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:30:04.451: INFO: namespace secrets-3659 deletion completed in 6.087428295s

• [SLOW TEST:16.494 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
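The secrets test above mounts a Secret as a volume with an explicit `defaultMode`, which is why it is tagged `[LinuxOnly]` (file modes are POSIX semantics). A sketch of the shape of such a pod (the secret name and container name come from the log; the image, mount path, and mode value are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-d3af81e7-bce3-4972-9f33-0cb7cdbc0f0b
      defaultMode: 0400   # assumed mode; applied to every projected key file
```

The test then reads the mounted file's permissions from inside the container to confirm `defaultMode` took effect.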
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:30:04.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9487
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-9487
STEP: Creating statefulset with conflicting port in namespace statefulset-9487
STEP: Waiting until pod test-pod will start running in namespace statefulset-9487
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9487
Feb  9 14:30:16.669: INFO: Observed stateful pod in namespace: statefulset-9487, name: ss-0, uid: 41e93935-2bff-426e-9a93-e979783ef1b2, status phase: Pending. Waiting for statefulset controller to delete.
Feb  9 14:35:16.669: INFO: Pod ss-0 expected to be re-created at least once
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  9 14:35:16.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-9487'
Feb  9 14:35:16.822: INFO: stderr: ""
Feb  9 14:35:16.822: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-9487
Priority:       0
Node:           iruya-node/
Labels:         baz=blah
                controller-revision-hash=ss-6f98bdb9c4
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    
Status:         Pending
IP:             
Controlled By:  StatefulSet/ss
Containers:
  nginx:
    Image:        docker.io/library/nginx:1.14-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8xgwc (ro)
Volumes:
  default-token-8xgwc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8xgwc
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                 Message
  ----     ------            ----  ----                 -------
  Warning  PodFitsHostPorts  5m8s  kubelet, iruya-node  Predicate PodFitsHostPorts failed

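The describe output above ends with the repeated kubelet rejection. When triaging a saved copy of a log like this, the `PodFitsHostPorts` warnings can be tallied per pod with standard text tools; a minimal sketch (the embedded two-line excerpt is a stand-in for the real log file):

```shell
#!/bin/sh
# Tally kubelet PodFitsHostPorts rejections per pod in an e2e log.
# The here-doc is a stand-in excerpt; point $log at the real file instead.
log=$(mktemp)
cat > "$log" <<'EOF'
Feb  9 14:30:05 event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb  9 14:30:06 event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
EOF
# Extract the "event for <pod>" fragment, keep the pod name, count duplicates.
grep -o 'event for [^:]*: {kubelet [^}]*} PodFitsHostPorts' "$log" \
  | awk '{gsub(":", "", $3); print $3}' | sort | uniq -c
rm -f "$log"
```

With the full log this prints one count per rejected pod (here, 8 rejections for `ss-0`), which makes the scheduling loop obvious at a glance.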
Feb  9 14:35:16.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-9487 --tail=100'
Feb  9 14:35:17.012: INFO: rc: 1
Feb  9 14:35:17.013: INFO: 
Last 100 log lines of ss-0:

Feb  9 14:35:17.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-9487'
Feb  9 14:35:17.153: INFO: stderr: ""
Feb  9 14:35:17.153: INFO: 
Output of kubectl describe test-pod:
Name:         test-pod
Namespace:    statefulset-9487
Priority:     0
Node:         iruya-node/10.96.3.65
Start Time:   Sun, 09 Feb 2020 14:30:04 +0000
Labels:       
Annotations:  
Status:       Running
IP:           10.44.0.1
Containers:
  nginx:
    Container ID:   docker://e5ee326408152dcd0227a583457fd9c91aec2d7fd9f51b7a89a773c353f73a4c
    Image:          docker.io/library/nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           21017/TCP
    Host Port:      21017/TCP
    State:          Running
      Started:      Sun, 09 Feb 2020 14:30:14 +0000
    Ready:          True
    Restart Count:  0
    Environment:    
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8xgwc (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-8xgwc:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8xgwc
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                 Message
  ----    ------   ----  ----                 -------
  Normal  Pulled   5m7s  kubelet, iruya-node  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
  Normal  Created  5m4s  kubelet, iruya-node  Created container nginx
  Normal  Started  5m3s  kubelet, iruya-node  Started container nginx

Feb  9 14:35:17.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-9487 --tail=100'
Feb  9 14:35:17.296: INFO: stderr: ""
Feb  9 14:35:17.296: INFO: stdout: ""
Feb  9 14:35:17.296: INFO: 
Last 100 log lines of test-pod:

Feb  9 14:35:17.296: INFO: Deleting all statefulset in ns statefulset-9487
Feb  9 14:35:17.304: INFO: Scaling statefulset ss to 0
Feb  9 14:35:27.353: INFO: Waiting for statefulset status.replicas updated to 0
Feb  9 14:35:27.358: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Collecting events from namespace "statefulset-9487".
STEP: Found 15 events.
Feb  9 14:35:27.406: INFO: At 2020-02-09 14:30:04 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-9487/ss is recreating failed Pod ss-0
Feb  9 14:35:27.406: INFO: At 2020-02-09 14:30:04 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Feb  9 14:35:27.406: INFO: At 2020-02-09 14:30:04 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Feb  9 14:35:27.406: INFO: At 2020-02-09 14:30:04 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb  9 14:35:27.406: INFO: At 2020-02-09 14:30:05 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
Feb  9 14:35:27.406: INFO: At 2020-02-09 14:30:05 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb  9 14:35:27.406: INFO: At 2020-02-09 14:30:05 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb  9 14:35:27.406: INFO: At 2020-02-09 14:30:06 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb  9 14:35:27.407: INFO: At 2020-02-09 14:30:07 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb  9 14:35:27.407: INFO: At 2020-02-09 14:30:08 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb  9 14:35:27.407: INFO: At 2020-02-09 14:30:08 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb  9 14:35:27.407: INFO: At 2020-02-09 14:30:08 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Feb  9 14:35:27.407: INFO: At 2020-02-09 14:30:10 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Feb  9 14:35:27.407: INFO: At 2020-02-09 14:30:13 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx
Feb  9 14:35:27.407: INFO: At 2020-02-09 14:30:14 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx
Feb  9 14:35:27.414: INFO: POD       NODE        PHASE    GRACE  CONDITIONS
Feb  9 14:35:27.414: INFO: test-pod  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:30:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:30:15 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:30:15 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:30:04 +0000 UTC  }]
Feb  9 14:35:27.414: INFO: 
Feb  9 14:35:27.429: INFO: 
Logging node info for node iruya-node
Feb  9 14:35:27.433: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:23707792,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-09 14:35:06 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-09 14:35:06 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-09 14:35:06 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-09 14:35:06 
+0000 UTC 2019-08-04 09:02:19 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} 
{[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} 
{[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Feb  9 14:35:27.434: INFO: 
Logging kubelet events for node iruya-node
Feb  9 14:35:27.437: INFO: 
Logging pods the kubelet thinks are on node iruya-node
Feb  9 14:35:27.455: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded)
Feb  9 14:35:27.455: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  9 14:35:27.455: INFO: test-pod started at 2020-02-09 14:30:04 +0000 UTC (0+1 container statuses recorded)
Feb  9 14:35:27.455: INFO: 	Container nginx ready: true, restart count 0
Feb  9 14:35:27.455: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded)
Feb  9 14:35:27.455: INFO: 	Container weave ready: true, restart count 0
Feb  9 14:35:27.455: INFO: 	Container weave-npc ready: true, restart count 0
W0209 14:35:27.485252       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  9 14:35:27.547: INFO: 
Latency metrics for node iruya-node
Feb  9 14:35:27.547: INFO: 
Logging node info for node iruya-server-sfge57q7djm7
Feb  9 14:35:27.552: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:23707766,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-02-09 14:34:47 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-09 14:34:47 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-09 14:34:47 +0000 UTC 2019-08-04 08:52:04 +0000 UTC 
KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-09 14:34:47 +0000 UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} 
{[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 
kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Feb  9 14:35:27.552: INFO: 
Logging kubelet events for node iruya-server-sfge57q7djm7
Feb  9 14:35:27.556: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7
Feb  9 14:35:27.569: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded)
Feb  9 14:35:27.569: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  9 14:35:27.569: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded)
Feb  9 14:35:27.569: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  9 14:35:27.569: INFO: coredns-5c98db65d4-xx8w8 started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Feb  9 14:35:27.569: INFO: 	Container coredns ready: true, restart count 0
Feb  9 14:35:27.569: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded)
Feb  9 14:35:27.569: INFO: 	Container etcd ready: true, restart count 0
Feb  9 14:35:27.569: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded)
Feb  9 14:35:27.569: INFO: 	Container weave ready: true, restart count 0
Feb  9 14:35:27.569: INFO: 	Container weave-npc ready: true, restart count 0
Feb  9 14:35:27.569: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Feb  9 14:35:27.569: INFO: 	Container coredns ready: true, restart count 0
Feb  9 14:35:27.569: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded)
Feb  9 14:35:27.569: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  9 14:35:27.569: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded)
Feb  9 14:35:27.569: INFO: 	Container kube-proxy ready: true, restart count 0
W0209 14:35:27.573977       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  9 14:35:27.612: INFO: 
Latency metrics for node iruya-server-sfge57q7djm7
Feb  9 14:35:27.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9487" for this suite.
Feb  9 14:35:49.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:35:49.811: INFO: namespace statefulset-9487 deletion completed in 22.192466226s

• Failure [345.359 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

    Feb  9 14:35:16.669: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769
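    The failure above comes down to a deliberate host-port conflict: the test pre-creates `test-pod` binding host port 21017 on `iruya-node`, then forces the StatefulSet pod `ss-0` onto the same node, where the kubelet rejects it each time with `PodFitsHostPorts`; the assertion fails because the test did not observe `ss-0` being deleted and recreated within its 5m window. A hypothetical reconstruction of the two conflicting objects, assembled from the describe output in the log (names, labels, image, and port come from the log; the exact manifests the e2e framework submits are an assumption):

    ```yaml
    # Sketch only -- not the framework's literal YAML.
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
      namespace: statefulset-9487
    spec:
      nodeName: iruya-node            # pinned to the target node
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
        ports:
        - containerPort: 21017
          hostPort: 21017             # occupies the host port first
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ss
      namespace: statefulset-9487
    spec:
      serviceName: test
      replicas: 1
      selector:
        matchLabels:
          foo: bar
      template:
        metadata:
          labels:
            foo: bar
            baz: blah
        spec:
          nodeName: iruya-node        # forced onto the node where the port is taken
          containers:
          - name: nginx
            image: docker.io/library/nginx:1.14-alpine
            ports:
            - containerPort: 21017
              hostPort: 21017         # conflicts -> kubelet rejects with PodFitsHostPorts
    ```

    Because both pods request the same hostPort on the same node, every recreation attempt for `ss-0` fails admission, which matches the burst of identical `PodFitsHostPorts` warnings in the collected events.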
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:35:49.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-681478a8-d7ca-47cf-a5c1-c50ce73aa73f
STEP: Creating a pod to test consume secrets
Feb  9 14:35:49.954: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d93c657c-8225-4fd8-8266-148fbd61c914" in namespace "projected-9058" to be "success or failure"
Feb  9 14:35:49.991: INFO: Pod "pod-projected-secrets-d93c657c-8225-4fd8-8266-148fbd61c914": Phase="Pending", Reason="", readiness=false. Elapsed: 37.042449ms
Feb  9 14:35:52.003: INFO: Pod "pod-projected-secrets-d93c657c-8225-4fd8-8266-148fbd61c914": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048485645s
Feb  9 14:35:54.016: INFO: Pod "pod-projected-secrets-d93c657c-8225-4fd8-8266-148fbd61c914": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061250786s
Feb  9 14:35:56.062: INFO: Pod "pod-projected-secrets-d93c657c-8225-4fd8-8266-148fbd61c914": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108093325s
Feb  9 14:35:58.073: INFO: Pod "pod-projected-secrets-d93c657c-8225-4fd8-8266-148fbd61c914": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118289935s
Feb  9 14:36:00.080: INFO: Pod "pod-projected-secrets-d93c657c-8225-4fd8-8266-148fbd61c914": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.125809826s
STEP: Saw pod success
Feb  9 14:36:00.080: INFO: Pod "pod-projected-secrets-d93c657c-8225-4fd8-8266-148fbd61c914" satisfied condition "success or failure"
Feb  9 14:36:00.084: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d93c657c-8225-4fd8-8266-148fbd61c914 container projected-secret-volume-test: 
STEP: delete the pod
Feb  9 14:36:00.139: INFO: Waiting for pod pod-projected-secrets-d93c657c-8225-4fd8-8266-148fbd61c914 to disappear
Feb  9 14:36:00.145: INFO: Pod pod-projected-secrets-d93c657c-8225-4fd8-8266-148fbd61c914 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:36:00.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9058" for this suite.
Feb  9 14:36:06.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:36:06.287: INFO: namespace projected-9058 deletion completed in 6.136512349s

• [SLOW TEST:16.476 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
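The spec above waits for the pod to reach "success or failure" by polling its phase every couple of seconds until it leaves Pending, with a 5m0s ceiling. A minimal sketch of that wait-loop pattern (`get_phase` stands in for a real API call such as `kubectl get pod -o jsonpath='{.status.phase}'`; it is not part of the e2e framework):

```python
import time

# Terminal pod phases: once reached, the wait loop stops.
TERMINAL = {"Succeeded", "Failed"}

def wait_for_terminal(get_phase, timeout=300.0, interval=2.0,
                      clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until a terminal phase or the timeout elapses."""
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in TERMINAL:
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

if __name__ == "__main__":
    # Simulate the Pending -> Succeeded transitions seen in the log.
    phases = iter(["Pending"] * 5 + ["Succeeded"])
    print(wait_for_terminal(lambda: next(phases), sleep=lambda s: None))
```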
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:36:06.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Feb  9 14:36:06.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb  9 14:36:06.428: INFO: stderr: ""
Feb  9 14:36:06.429: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:36:06.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2513" for this suite.
Feb  9 14:36:12.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:36:12.599: INFO: namespace kubectl-2513 deletion completed in 6.164842119s

• [SLOW TEST:6.311 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
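The `cluster-info` stdout captured above is colorized with ANSI SGR escapes (`\x1b[0;32m` green, `\x1b[0;33m` yellow, `\x1b[0m` reset), which is why the raw log looks noisy. A small helper to recover the plain text:

```python
import re

# Matches SGR color sequences like \x1b[0;32m and the reset \x1b[0m.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s: str) -> str:
    """Remove ANSI color escape sequences from terminal output."""
    return ANSI_RE.sub("", s)

raw = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
       "\x1b[0;33mhttps://172.24.4.57:6443\x1b[0m")
print(strip_ansi(raw))
# -> Kubernetes master is running at https://172.24.4.57:6443
```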
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:36:12.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Feb  9 14:36:12.766: INFO: Pod name pod-release: Found 0 pods out of 1
Feb  9 14:36:17.860: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:36:18.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1782" for this suite.
Feb  9 14:36:24.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:36:24.362: INFO: namespace replication-controller-1782 deletion completed in 6.303223602s

• [SLOW TEST:11.763 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
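The ReplicationController spec above "releases" a pod by changing its label so it no longer matches the controller's equality-based selector. That matching is just a subset check of the selector against the pod's label map; a sketch (the patched label value is hypothetical, the log does not show it):

```python
def matches(selector: dict, labels: dict) -> bool:
    """Equality-based selector match: every selector pair must appear in labels."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-release"}
pod_labels = {"name": "pod-release"}
print(matches(selector, pod_labels))        # True: pod is owned by the RC
pod_labels["name"] = "pod-release-patched"  # hypothetical relabel from the test
print(matches(selector, pod_labels))        # False: pod is released
```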
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:36:24.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  9 14:36:24.497: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  9 14:36:24.511: INFO: Waiting for terminating namespaces to be deleted...
Feb  9 14:36:24.514: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  9 14:36:24.537: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  9 14:36:24.537: INFO: 	Container weave ready: true, restart count 0
Feb  9 14:36:24.537: INFO: 	Container weave-npc ready: true, restart count 0
Feb  9 14:36:24.537: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb  9 14:36:24.537: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  9 14:36:24.537: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  9 14:36:24.544: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb  9 14:36:24.544: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  9 14:36:24.544: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  9 14:36:24.544: INFO: 	Container coredns ready: true, restart count 0
Feb  9 14:36:24.544: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb  9 14:36:24.544: INFO: 	Container etcd ready: true, restart count 0
Feb  9 14:36:24.544: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  9 14:36:24.544: INFO: 	Container weave ready: true, restart count 0
Feb  9 14:36:24.544: INFO: 	Container weave-npc ready: true, restart count 0
Feb  9 14:36:24.544: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  9 14:36:24.544: INFO: 	Container coredns ready: true, restart count 0
Feb  9 14:36:24.544: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb  9 14:36:24.544: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  9 14:36:24.544: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb  9 14:36:24.544: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  9 14:36:24.544: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb  9 14:36:24.544: INFO: 	Container kube-apiserver ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Feb  9 14:36:24.721: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  9 14:36:24.721: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  9 14:36:24.721: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  9 14:36:24.721: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Feb  9 14:36:24.721: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Feb  9 14:36:24.721: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Feb  9 14:36:24.721: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Feb  9 14:36:24.721: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Feb  9 14:36:24.721: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Feb  9 14:36:24.721: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0ab9c5b0-5e31-4c67-a856-ea27ca3052bd.15f1c2ab3c43f076], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4296/filler-pod-0ab9c5b0-5e31-4c67-a856-ea27ca3052bd to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0ab9c5b0-5e31-4c67-a856-ea27ca3052bd.15f1c2ac73fba4ef], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0ab9c5b0-5e31-4c67-a856-ea27ca3052bd.15f1c2ad6256a077], Reason = [Created], Message = [Created container filler-pod-0ab9c5b0-5e31-4c67-a856-ea27ca3052bd]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-0ab9c5b0-5e31-4c67-a856-ea27ca3052bd.15f1c2ad89b61ea1], Reason = [Started], Message = [Started container filler-pod-0ab9c5b0-5e31-4c67-a856-ea27ca3052bd]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-60cbd8ee-9746-465c-836a-2c4f67e336de.15f1c2ab3b804eef], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4296/filler-pod-60cbd8ee-9746-465c-836a-2c4f67e336de to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-60cbd8ee-9746-465c-836a-2c4f67e336de.15f1c2ada9113eda], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-60cbd8ee-9746-465c-836a-2c4f67e336de.15f1c2ae87d3baad], Reason = [Created], Message = [Created container filler-pod-60cbd8ee-9746-465c-836a-2c4f67e336de]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-60cbd8ee-9746-465c-836a-2c4f67e336de.15f1c2aea82d50ba], Reason = [Started], Message = [Started container filler-pod-60cbd8ee-9746-465c-836a-2c4f67e336de]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f1c2aef92d20ef], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:36:42.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4296" for this suite.
Feb  9 14:36:50.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:36:50.323: INFO: namespace sched-pred-4296 deletion completed in 8.178013431s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:25.960 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
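The `FailedScheduling` event above ("0/2 nodes are available: 2 Insufficient cpu.") is the arithmetic consequence of the per-pod CPU requests logged before it: the filler pods consume what remains after the existing requests, so the additional pod cannot fit anywhere. A sketch of the millicore accounting on `iruya-server-sfge57q7djm7`, using only the request figures from the log (node allocatable capacity is not shown in the log, so it is omitted here):

```python
def millicores(q: str) -> int:
    """Parse a Kubernetes CPU quantity: '250m' -> 250, '1' -> 1000."""
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

# CPU requests on iruya-server-sfge57q7djm7, straight from the log lines:
# coredns x2, etcd, kube-apiserver, kube-controller-manager, kube-proxy,
# kube-scheduler, weave-net.
server_requests = ["100m", "100m", "0m", "250m", "200m", "0m", "100m", "20m"]
used = sum(millicores(q) for q in server_requests)
print(used)  # 770 millicores already requested before the filler pods start
```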
S
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:36:50.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Feb  9 14:36:51.766: INFO: Waiting up to 5m0s for pod "client-containers-53c55e1a-e334-49b4-8664-a7bb50d3c8a7" in namespace "containers-7862" to be "success or failure"
Feb  9 14:36:51.772: INFO: Pod "client-containers-53c55e1a-e334-49b4-8664-a7bb50d3c8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.163462ms
Feb  9 14:36:53.804: INFO: Pod "client-containers-53c55e1a-e334-49b4-8664-a7bb50d3c8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037100589s
Feb  9 14:36:55.812: INFO: Pod "client-containers-53c55e1a-e334-49b4-8664-a7bb50d3c8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045908496s
Feb  9 14:36:57.871: INFO: Pod "client-containers-53c55e1a-e334-49b4-8664-a7bb50d3c8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104690446s
Feb  9 14:36:59.886: INFO: Pod "client-containers-53c55e1a-e334-49b4-8664-a7bb50d3c8a7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.119689614s
Feb  9 14:37:01.898: INFO: Pod "client-containers-53c55e1a-e334-49b4-8664-a7bb50d3c8a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.13152579s
STEP: Saw pod success
Feb  9 14:37:01.898: INFO: Pod "client-containers-53c55e1a-e334-49b4-8664-a7bb50d3c8a7" satisfied condition "success or failure"
Feb  9 14:37:01.902: INFO: Trying to get logs from node iruya-node pod client-containers-53c55e1a-e334-49b4-8664-a7bb50d3c8a7 container test-container: 
STEP: delete the pod
Feb  9 14:37:02.254: INFO: Waiting for pod client-containers-53c55e1a-e334-49b4-8664-a7bb50d3c8a7 to disappear
Feb  9 14:37:02.264: INFO: Pod client-containers-53c55e1a-e334-49b4-8664-a7bb50d3c8a7 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:37:02.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7862" for this suite.
Feb  9 14:37:08.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:37:08.421: INFO: namespace containers-7862 deletion completed in 6.129078797s

• [SLOW TEST:18.098 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
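The Docker Containers spec above verifies that when a pod spec leaves both `command` and `args` unset, the container runs the image's own ENTRYPOINT and CMD. The full defaulting matrix (standard Kubernetes behavior; the string values here are placeholders, not from the log) can be tabulated:

```python
# (command set, args set) -> (effective entrypoint, effective arguments).
# Note: setting command alone discards the image CMD entirely.
cases = {
    (False, False): ("image ENTRYPOINT", "image CMD"),
    (False, True):  ("image ENTRYPOINT", "pod args"),
    (True,  False): ("pod command",      None),
    (True,  True):  ("pod command",      "pod args"),
}

for (cmd_set, args_set), effective in sorted(cases.items()):
    print(f"command={cmd_set!s:5} args={args_set!s:5} -> {effective}")
```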
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:37:08.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 14:37:08.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a19cdace-6bd1-4f04-9b63-ec50a8a6a3ed" in namespace "projected-9004" to be "success or failure"
Feb  9 14:37:08.599: INFO: Pod "downwardapi-volume-a19cdace-6bd1-4f04-9b63-ec50a8a6a3ed": Phase="Pending", Reason="", readiness=false. Elapsed: 84.906151ms
Feb  9 14:37:10.625: INFO: Pod "downwardapi-volume-a19cdace-6bd1-4f04-9b63-ec50a8a6a3ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110403502s
Feb  9 14:37:12.635: INFO: Pod "downwardapi-volume-a19cdace-6bd1-4f04-9b63-ec50a8a6a3ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120901515s
Feb  9 14:37:14.650: INFO: Pod "downwardapi-volume-a19cdace-6bd1-4f04-9b63-ec50a8a6a3ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135329505s
Feb  9 14:37:16.670: INFO: Pod "downwardapi-volume-a19cdace-6bd1-4f04-9b63-ec50a8a6a3ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.155913798s
STEP: Saw pod success
Feb  9 14:37:16.670: INFO: Pod "downwardapi-volume-a19cdace-6bd1-4f04-9b63-ec50a8a6a3ed" satisfied condition "success or failure"
Feb  9 14:37:16.677: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a19cdace-6bd1-4f04-9b63-ec50a8a6a3ed container client-container: 
STEP: delete the pod
Feb  9 14:37:16.783: INFO: Waiting for pod downwardapi-volume-a19cdace-6bd1-4f04-9b63-ec50a8a6a3ed to disappear
Feb  9 14:37:16.792: INFO: Pod downwardapi-volume-a19cdace-6bd1-4f04-9b63-ec50a8a6a3ed no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:37:16.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9004" for this suite.
Feb  9 14:37:22.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:37:22.924: INFO: namespace projected-9004 deletion completed in 6.125054818s

• [SLOW TEST:14.502 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:37:22.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7308
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb  9 14:37:23.039: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Feb  9 14:38:01.273: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7308 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 14:38:01.273: INFO: >>> kubeConfig: /root/.kube/config
I0209 14:38:01.356008       8 log.go:172] (0xc00026b340) (0xc00304e640) Create stream
I0209 14:38:01.356082       8 log.go:172] (0xc00026b340) (0xc00304e640) Stream added, broadcasting: 1
I0209 14:38:01.370045       8 log.go:172] (0xc00026b340) Reply frame received for 1
I0209 14:38:01.370180       8 log.go:172] (0xc00026b340) (0xc0010f1ae0) Create stream
I0209 14:38:01.370205       8 log.go:172] (0xc00026b340) (0xc0010f1ae0) Stream added, broadcasting: 3
I0209 14:38:01.372511       8 log.go:172] (0xc00026b340) Reply frame received for 3
I0209 14:38:01.372551       8 log.go:172] (0xc00026b340) (0xc0010f1b80) Create stream
I0209 14:38:01.372564       8 log.go:172] (0xc00026b340) (0xc0010f1b80) Stream added, broadcasting: 5
I0209 14:38:01.375606       8 log.go:172] (0xc00026b340) Reply frame received for 5
I0209 14:38:01.629277       8 log.go:172] (0xc00026b340) Data frame received for 3
I0209 14:38:01.629417       8 log.go:172] (0xc0010f1ae0) (3) Data frame handling
I0209 14:38:01.629456       8 log.go:172] (0xc0010f1ae0) (3) Data frame sent
I0209 14:38:01.813121       8 log.go:172] (0xc00026b340) Data frame received for 1
I0209 14:38:01.813271       8 log.go:172] (0xc00026b340) (0xc0010f1ae0) Stream removed, broadcasting: 3
I0209 14:38:01.813329       8 log.go:172] (0xc00304e640) (1) Data frame handling
I0209 14:38:01.813345       8 log.go:172] (0xc00304e640) (1) Data frame sent
I0209 14:38:01.813523       8 log.go:172] (0xc00026b340) (0xc0010f1b80) Stream removed, broadcasting: 5
I0209 14:38:01.813550       8 log.go:172] (0xc00026b340) (0xc00304e640) Stream removed, broadcasting: 1
I0209 14:38:01.813564       8 log.go:172] (0xc00026b340) Go away received
I0209 14:38:01.814015       8 log.go:172] (0xc00026b340) (0xc00304e640) Stream removed, broadcasting: 1
I0209 14:38:01.814038       8 log.go:172] (0xc00026b340) (0xc0010f1ae0) Stream removed, broadcasting: 3
I0209 14:38:01.814047       8 log.go:172] (0xc00026b340) (0xc0010f1b80) Stream removed, broadcasting: 5
Feb  9 14:38:01.814: INFO: Found all expected endpoints: [netserver-0]
Feb  9 14:38:01.825: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-7308 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb  9 14:38:01.825: INFO: >>> kubeConfig: /root/.kube/config
I0209 14:38:01.943614       8 log.go:172] (0xc0028b8160) (0xc00304ea00) Create stream
I0209 14:38:01.943707       8 log.go:172] (0xc0028b8160) (0xc00304ea00) Stream added, broadcasting: 1
I0209 14:38:01.953064       8 log.go:172] (0xc0028b8160) Reply frame received for 1
I0209 14:38:01.953200       8 log.go:172] (0xc0028b8160) (0xc0010f1d60) Create stream
I0209 14:38:01.953226       8 log.go:172] (0xc0028b8160) (0xc0010f1d60) Stream added, broadcasting: 3
I0209 14:38:01.955171       8 log.go:172] (0xc0028b8160) Reply frame received for 3
I0209 14:38:01.955202       8 log.go:172] (0xc0028b8160) (0xc00304eaa0) Create stream
I0209 14:38:01.955209       8 log.go:172] (0xc0028b8160) (0xc00304eaa0) Stream added, broadcasting: 5
I0209 14:38:01.956836       8 log.go:172] (0xc0028b8160) Reply frame received for 5
I0209 14:38:02.168935       8 log.go:172] (0xc0028b8160) Data frame received for 3
I0209 14:38:02.169206       8 log.go:172] (0xc0010f1d60) (3) Data frame handling
I0209 14:38:02.169281       8 log.go:172] (0xc0010f1d60) (3) Data frame sent
I0209 14:38:02.371797       8 log.go:172] (0xc0028b8160) Data frame received for 1
I0209 14:38:02.372145       8 log.go:172] (0xc0028b8160) (0xc0010f1d60) Stream removed, broadcasting: 3
I0209 14:38:02.372207       8 log.go:172] (0xc0028b8160) (0xc00304eaa0) Stream removed, broadcasting: 5
I0209 14:38:02.372253       8 log.go:172] (0xc00304ea00) (1) Data frame handling
I0209 14:38:02.372275       8 log.go:172] (0xc00304ea00) (1) Data frame sent
I0209 14:38:02.372284       8 log.go:172] (0xc0028b8160) (0xc00304ea00) Stream removed, broadcasting: 1
I0209 14:38:02.372300       8 log.go:172] (0xc0028b8160) Go away received
I0209 14:38:02.372536       8 log.go:172] (0xc0028b8160) (0xc00304ea00) Stream removed, broadcasting: 1
I0209 14:38:02.372543       8 log.go:172] (0xc0028b8160) (0xc0010f1d60) Stream removed, broadcasting: 3
I0209 14:38:02.372547       8 log.go:172] (0xc0028b8160) (0xc00304eaa0) Stream removed, broadcasting: 5
Feb  9 14:38:02.372: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:38:02.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7308" for this suite.
Feb  9 14:38:26.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:38:26.588: INFO: namespace pod-network-test-7308 deletion completed in 24.202321365s

• [SLOW TEST:63.664 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
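The granular networking check above execs `curl http://<pod-ip>:8080/hostName` inside a hostexec pod and expects each netserver to answer with its own name. A local stand-in for that endpoint, using only the standard library (this mimics the probe's shape, not the actual agnhost/netserver image):

```python
import http.server
import socket
import threading
import urllib.request

class HostName(http.server.BaseHTTPRequestHandler):
    """Serve the hostname on /hostName, like the e2e netserver pods do."""
    def do_GET(self):
        if self.path == "/hostName":
            body = socket.gethostname().encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):  # keep the demo quiet
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), HostName)
threading.Thread(target=srv.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{srv.server_port}/hostName"
print(urllib.request.urlopen(url, timeout=5).read().decode())
srv.shutdown()
```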
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:38:26.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-b8eba664-5f8c-4a24-aafe-a6b1b8b3c8cc
STEP: Creating a pod to test consume configMaps
Feb  9 14:38:26.723: INFO: Waiting up to 5m0s for pod "pod-configmaps-d2b89104-38d5-479b-90e8-a329212f26b4" in namespace "configmap-3435" to be "success or failure"
Feb  9 14:38:26.743: INFO: Pod "pod-configmaps-d2b89104-38d5-479b-90e8-a329212f26b4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.465985ms
Feb  9 14:38:28.750: INFO: Pod "pod-configmaps-d2b89104-38d5-479b-90e8-a329212f26b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027038599s
Feb  9 14:38:30.803: INFO: Pod "pod-configmaps-d2b89104-38d5-479b-90e8-a329212f26b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079134156s
Feb  9 14:38:32.812: INFO: Pod "pod-configmaps-d2b89104-38d5-479b-90e8-a329212f26b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088376805s
Feb  9 14:38:34.828: INFO: Pod "pod-configmaps-d2b89104-38d5-479b-90e8-a329212f26b4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.104655308s
Feb  9 14:38:36.842: INFO: Pod "pod-configmaps-d2b89104-38d5-479b-90e8-a329212f26b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118344906s
STEP: Saw pod success
Feb  9 14:38:36.842: INFO: Pod "pod-configmaps-d2b89104-38d5-479b-90e8-a329212f26b4" satisfied condition "success or failure"
Feb  9 14:38:36.873: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d2b89104-38d5-479b-90e8-a329212f26b4 container configmap-volume-test: 
STEP: delete the pod
Feb  9 14:38:36.951: INFO: Waiting for pod pod-configmaps-d2b89104-38d5-479b-90e8-a329212f26b4 to disappear
Feb  9 14:38:36.958: INFO: Pod pod-configmaps-d2b89104-38d5-479b-90e8-a329212f26b4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:38:36.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3435" for this suite.
Feb  9 14:38:43.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:38:43.156: INFO: namespace configmap-3435 deletion completed in 6.192043367s

• [SLOW TEST:16.567 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
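Timings throughout this log are Go `time.Duration` strings ("84.906151ms", "6.192043367s", "5m0s"). A small parser to turn them into seconds when post-processing a run like this one:

```python
import re

# Unit multipliers for the duration units that appear in these logs.
UNITS = {"ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}
PART = re.compile(r"(\d+(?:\.\d+)?)(ms|s|m|h)")

def parse_go_duration(d: str) -> float:
    """Convert a Go duration string like '5m0s' into seconds."""
    return sum(float(v) * UNITS[u] for v, u in PART.findall(d))

print(parse_go_duration("84.906151ms"))   # ~0.0849 s
print(parse_go_duration("6.192043367s"))  # ~6.192 s
print(parse_go_duration("5m0s"))          # 300.0 s
```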
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:38:43.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Feb  9 14:38:43.351: INFO: Waiting up to 5m0s for pod "client-containers-7d62ce8f-0e42-4028-b23f-3305b5512c45" in namespace "containers-9341" to be "success or failure"
Feb  9 14:38:43.359: INFO: Pod "client-containers-7d62ce8f-0e42-4028-b23f-3305b5512c45": Phase="Pending", Reason="", readiness=false. Elapsed: 7.572293ms
Feb  9 14:38:45.372: INFO: Pod "client-containers-7d62ce8f-0e42-4028-b23f-3305b5512c45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020317376s
Feb  9 14:38:47.380: INFO: Pod "client-containers-7d62ce8f-0e42-4028-b23f-3305b5512c45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028202167s
Feb  9 14:38:49.389: INFO: Pod "client-containers-7d62ce8f-0e42-4028-b23f-3305b5512c45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037599445s
Feb  9 14:38:51.399: INFO: Pod "client-containers-7d62ce8f-0e42-4028-b23f-3305b5512c45": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047575911s
Feb  9 14:38:53.411: INFO: Pod "client-containers-7d62ce8f-0e42-4028-b23f-3305b5512c45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.059341773s
STEP: Saw pod success
Feb  9 14:38:53.411: INFO: Pod "client-containers-7d62ce8f-0e42-4028-b23f-3305b5512c45" satisfied condition "success or failure"
Feb  9 14:38:53.416: INFO: Trying to get logs from node iruya-node pod client-containers-7d62ce8f-0e42-4028-b23f-3305b5512c45 container test-container: 
STEP: delete the pod
Feb  9 14:38:53.648: INFO: Waiting for pod client-containers-7d62ce8f-0e42-4028-b23f-3305b5512c45 to disappear
Feb  9 14:38:53.658: INFO: Pod client-containers-7d62ce8f-0e42-4028-b23f-3305b5512c45 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:38:53.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9341" for this suite.
Feb  9 14:38:59.760: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:38:59.933: INFO: namespace containers-9341 deletion completed in 6.267603225s

• [SLOW TEST:16.777 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
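The "override the image's default arguments (docker cmd)" case above turns on the mapping between the pod spec and Docker image directives: `args` overrides the image's `CMD`, while `command` would override its `ENTRYPOINT`. The actual manifest is not in this log; a minimal illustrative sketch:

```shell
# Illustrative sketch only: overriding the image's default CMD via "args".
cat > override-args-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # "args" replaces the image's Docker CMD; the image's ENTRYPOINT is kept.
    # To replace ENTRYPOINT as well, set "command" instead.
    args: ["echo", "override", "arguments"]
EOF
# One would then apply it with: kubectl apply -f override-args-pod.yaml
```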
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:38:59.935: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2243
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-2243
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2243
Feb  9 14:39:00.130: INFO: Found 0 stateful pods, waiting for 1
Feb  9 14:39:10.144: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb  9 14:39:10.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2243 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  9 14:39:10.806: INFO: stderr: "I0209 14:39:10.368116    2093 log.go:172] (0xc0006eca50) (0xc0005ec640) Create stream\nI0209 14:39:10.368282    2093 log.go:172] (0xc0006eca50) (0xc0005ec640) Stream added, broadcasting: 1\nI0209 14:39:10.376584    2093 log.go:172] (0xc0006eca50) Reply frame received for 1\nI0209 14:39:10.376699    2093 log.go:172] (0xc0006eca50) (0xc000816000) Create stream\nI0209 14:39:10.376757    2093 log.go:172] (0xc0006eca50) (0xc000816000) Stream added, broadcasting: 3\nI0209 14:39:10.379988    2093 log.go:172] (0xc0006eca50) Reply frame received for 3\nI0209 14:39:10.380027    2093 log.go:172] (0xc0006eca50) (0xc0006a01e0) Create stream\nI0209 14:39:10.380068    2093 log.go:172] (0xc0006eca50) (0xc0006a01e0) Stream added, broadcasting: 5\nI0209 14:39:10.383033    2093 log.go:172] (0xc0006eca50) Reply frame received for 5\nI0209 14:39:10.555750    2093 log.go:172] (0xc0006eca50) Data frame received for 5\nI0209 14:39:10.555868    2093 log.go:172] (0xc0006a01e0) (5) Data frame handling\nI0209 14:39:10.555895    2093 log.go:172] (0xc0006a01e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0209 14:39:10.623206    2093 log.go:172] (0xc0006eca50) Data frame received for 3\nI0209 14:39:10.623270    2093 log.go:172] (0xc000816000) (3) Data frame handling\nI0209 14:39:10.623301    2093 log.go:172] (0xc000816000) (3) Data frame sent\nI0209 14:39:10.797383    2093 log.go:172] (0xc0006eca50) (0xc0006a01e0) Stream removed, broadcasting: 5\nI0209 14:39:10.797574    2093 log.go:172] (0xc0006eca50) Data frame received for 1\nI0209 14:39:10.797725    2093 log.go:172] (0xc0006eca50) (0xc000816000) Stream removed, broadcasting: 3\nI0209 14:39:10.797816    2093 log.go:172] (0xc0005ec640) (1) Data frame handling\nI0209 14:39:10.797881    2093 log.go:172] (0xc0005ec640) (1) Data frame sent\nI0209 14:39:10.797923    2093 log.go:172] (0xc0006eca50) (0xc0005ec640) Stream removed, broadcasting: 1\nI0209 14:39:10.797958    2093 log.go:172] 
(0xc0006eca50) Go away received\nI0209 14:39:10.798906    2093 log.go:172] (0xc0006eca50) (0xc0005ec640) Stream removed, broadcasting: 1\nI0209 14:39:10.798920    2093 log.go:172] (0xc0006eca50) (0xc000816000) Stream removed, broadcasting: 3\nI0209 14:39:10.798931    2093 log.go:172] (0xc0006eca50) (0xc0006a01e0) Stream removed, broadcasting: 5\n"
Feb  9 14:39:10.807: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  9 14:39:10.807: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  9 14:39:10.813: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  9 14:39:20.822: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  9 14:39:20.822: INFO: Waiting for statefulset status.replicas updated to 0
Feb  9 14:39:20.856: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999703s
Feb  9 14:39:21.873: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.981022226s
Feb  9 14:39:22.884: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.964364421s
Feb  9 14:39:23.894: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.953943899s
Feb  9 14:39:24.903: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.943405045s
Feb  9 14:39:25.913: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.934532579s
Feb  9 14:39:26.921: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.924511531s
Feb  9 14:39:27.929: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.917097859s
Feb  9 14:39:28.943: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.908918563s
Feb  9 14:39:29.955: INFO: Verifying statefulset ss doesn't scale past 1 for another 894.919131ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2243
Feb  9 14:39:30.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2243 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:39:31.548: INFO: stderr: "I0209 14:39:31.194817    2112 log.go:172] (0xc0009c4000) (0xc0009a6140) Create stream\nI0209 14:39:31.194977    2112 log.go:172] (0xc0009c4000) (0xc0009a6140) Stream added, broadcasting: 1\nI0209 14:39:31.202021    2112 log.go:172] (0xc0009c4000) Reply frame received for 1\nI0209 14:39:31.202064    2112 log.go:172] (0xc0009c4000) (0xc000507c20) Create stream\nI0209 14:39:31.202082    2112 log.go:172] (0xc0009c4000) (0xc000507c20) Stream added, broadcasting: 3\nI0209 14:39:31.203950    2112 log.go:172] (0xc0009c4000) Reply frame received for 3\nI0209 14:39:31.203980    2112 log.go:172] (0xc0009c4000) (0xc0009a61e0) Create stream\nI0209 14:39:31.203991    2112 log.go:172] (0xc0009c4000) (0xc0009a61e0) Stream added, broadcasting: 5\nI0209 14:39:31.205426    2112 log.go:172] (0xc0009c4000) Reply frame received for 5\nI0209 14:39:31.354953    2112 log.go:172] (0xc0009c4000) Data frame received for 5\nI0209 14:39:31.355029    2112 log.go:172] (0xc0009a61e0) (5) Data frame handling\nI0209 14:39:31.355078    2112 log.go:172] (0xc0009a61e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0209 14:39:31.361652    2112 log.go:172] (0xc0009c4000) Data frame received for 3\nI0209 14:39:31.361680    2112 log.go:172] (0xc000507c20) (3) Data frame handling\nI0209 14:39:31.361701    2112 log.go:172] (0xc000507c20) (3) Data frame sent\nI0209 14:39:31.534604    2112 log.go:172] (0xc0009c4000) (0xc000507c20) Stream removed, broadcasting: 3\nI0209 14:39:31.534777    2112 log.go:172] (0xc0009c4000) Data frame received for 1\nI0209 14:39:31.534846    2112 log.go:172] (0xc0009a6140) (1) Data frame handling\nI0209 14:39:31.534894    2112 log.go:172] (0xc0009a6140) (1) Data frame sent\nI0209 14:39:31.535101    2112 log.go:172] (0xc0009c4000) (0xc0009a61e0) Stream removed, broadcasting: 5\nI0209 14:39:31.535243    2112 log.go:172] (0xc0009c4000) (0xc0009a6140) Stream removed, broadcasting: 1\nI0209 14:39:31.535310    2112 log.go:172] 
(0xc0009c4000) Go away received\nI0209 14:39:31.536398    2112 log.go:172] (0xc0009c4000) (0xc0009a6140) Stream removed, broadcasting: 1\nI0209 14:39:31.536423    2112 log.go:172] (0xc0009c4000) (0xc000507c20) Stream removed, broadcasting: 3\nI0209 14:39:31.536435    2112 log.go:172] (0xc0009c4000) (0xc0009a61e0) Stream removed, broadcasting: 5\n"
Feb  9 14:39:31.549: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  9 14:39:31.549: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  9 14:39:31.560: INFO: Found 1 stateful pods, waiting for 3
Feb  9 14:39:41.581: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:39:41.581: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:39:41.581: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb  9 14:39:51.571: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:39:51.571: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:39:51.571: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb  9 14:39:51.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2243 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  9 14:39:54.316: INFO: stderr: "I0209 14:39:53.930634    2132 log.go:172] (0xc000b24580) (0xc000402960) Create stream\nI0209 14:39:53.930800    2132 log.go:172] (0xc000b24580) (0xc000402960) Stream added, broadcasting: 1\nI0209 14:39:53.943742    2132 log.go:172] (0xc000b24580) Reply frame received for 1\nI0209 14:39:53.943939    2132 log.go:172] (0xc000b24580) (0xc000694000) Create stream\nI0209 14:39:53.943974    2132 log.go:172] (0xc000b24580) (0xc000694000) Stream added, broadcasting: 3\nI0209 14:39:53.948333    2132 log.go:172] (0xc000b24580) Reply frame received for 3\nI0209 14:39:53.948535    2132 log.go:172] (0xc000b24580) (0xc0006f20a0) Create stream\nI0209 14:39:53.948559    2132 log.go:172] (0xc000b24580) (0xc0006f20a0) Stream added, broadcasting: 5\nI0209 14:39:53.953942    2132 log.go:172] (0xc000b24580) Reply frame received for 5\nI0209 14:39:54.111003    2132 log.go:172] (0xc000b24580) Data frame received for 3\nI0209 14:39:54.111116    2132 log.go:172] (0xc000694000) (3) Data frame handling\nI0209 14:39:54.111181    2132 log.go:172] (0xc000694000) (3) Data frame sent\nI0209 14:39:54.111246    2132 log.go:172] (0xc000b24580) Data frame received for 5\nI0209 14:39:54.111274    2132 log.go:172] (0xc0006f20a0) (5) Data frame handling\nI0209 14:39:54.111314    2132 log.go:172] (0xc0006f20a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0209 14:39:54.304794    2132 log.go:172] (0xc000b24580) Data frame received for 1\nI0209 14:39:54.304903    2132 log.go:172] (0xc000b24580) (0xc000694000) Stream removed, broadcasting: 3\nI0209 14:39:54.304961    2132 log.go:172] (0xc000402960) (1) Data frame handling\nI0209 14:39:54.304995    2132 log.go:172] (0xc000402960) (1) Data frame sent\nI0209 14:39:54.305124    2132 log.go:172] (0xc000b24580) (0xc0006f20a0) Stream removed, broadcasting: 5\nI0209 14:39:54.305162    2132 log.go:172] (0xc000b24580) (0xc000402960) Stream removed, broadcasting: 1\nI0209 14:39:54.305183    2132 log.go:172] 
(0xc000b24580) Go away received\nI0209 14:39:54.306099    2132 log.go:172] (0xc000b24580) (0xc000402960) Stream removed, broadcasting: 1\nI0209 14:39:54.306118    2132 log.go:172] (0xc000b24580) (0xc000694000) Stream removed, broadcasting: 3\nI0209 14:39:54.306131    2132 log.go:172] (0xc000b24580) (0xc0006f20a0) Stream removed, broadcasting: 5\n"
Feb  9 14:39:54.317: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  9 14:39:54.317: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  9 14:39:54.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2243 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  9 14:39:54.950: INFO: stderr: "I0209 14:39:54.553939    2165 log.go:172] (0xc000116e70) (0xc000988640) Create stream\nI0209 14:39:54.554105    2165 log.go:172] (0xc000116e70) (0xc000988640) Stream added, broadcasting: 1\nI0209 14:39:54.561338    2165 log.go:172] (0xc000116e70) Reply frame received for 1\nI0209 14:39:54.561408    2165 log.go:172] (0xc000116e70) (0xc000934000) Create stream\nI0209 14:39:54.561421    2165 log.go:172] (0xc000116e70) (0xc000934000) Stream added, broadcasting: 3\nI0209 14:39:54.563739    2165 log.go:172] (0xc000116e70) Reply frame received for 3\nI0209 14:39:54.563775    2165 log.go:172] (0xc000116e70) (0xc000606140) Create stream\nI0209 14:39:54.563809    2165 log.go:172] (0xc000116e70) (0xc000606140) Stream added, broadcasting: 5\nI0209 14:39:54.570987    2165 log.go:172] (0xc000116e70) Reply frame received for 5\nI0209 14:39:54.759727    2165 log.go:172] (0xc000116e70) Data frame received for 5\nI0209 14:39:54.760334    2165 log.go:172] (0xc000606140) (5) Data frame handling\nI0209 14:39:54.760435    2165 log.go:172] (0xc000606140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0209 14:39:54.802514    2165 log.go:172] (0xc000116e70) Data frame received for 3\nI0209 14:39:54.802957    2165 log.go:172] (0xc000934000) (3) Data frame handling\nI0209 14:39:54.802997    2165 log.go:172] (0xc000934000) (3) Data frame sent\nI0209 14:39:54.939988    2165 log.go:172] (0xc000116e70) (0xc000934000) Stream removed, broadcasting: 3\nI0209 14:39:54.940117    2165 log.go:172] (0xc000116e70) Data frame received for 1\nI0209 14:39:54.940137    2165 log.go:172] (0xc000988640) (1) Data frame handling\nI0209 14:39:54.940153    2165 log.go:172] (0xc000116e70) (0xc000606140) Stream removed, broadcasting: 5\nI0209 14:39:54.940209    2165 log.go:172] (0xc000988640) (1) Data frame sent\nI0209 14:39:54.940225    2165 log.go:172] (0xc000116e70) (0xc000988640) Stream removed, broadcasting: 1\nI0209 14:39:54.940353    2165 log.go:172] 
(0xc000116e70) Go away received\nI0209 14:39:54.941022    2165 log.go:172] (0xc000116e70) (0xc000988640) Stream removed, broadcasting: 1\nI0209 14:39:54.941051    2165 log.go:172] (0xc000116e70) (0xc000934000) Stream removed, broadcasting: 3\nI0209 14:39:54.941067    2165 log.go:172] (0xc000116e70) (0xc000606140) Stream removed, broadcasting: 5\n"
Feb  9 14:39:54.951: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  9 14:39:54.951: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  9 14:39:54.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2243 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  9 14:39:55.430: INFO: stderr: "I0209 14:39:55.107131    2185 log.go:172] (0xc000684a50) (0xc00073a8c0) Create stream\nI0209 14:39:55.107340    2185 log.go:172] (0xc000684a50) (0xc00073a8c0) Stream added, broadcasting: 1\nI0209 14:39:55.118983    2185 log.go:172] (0xc000684a50) Reply frame received for 1\nI0209 14:39:55.119418    2185 log.go:172] (0xc000684a50) (0xc00073a960) Create stream\nI0209 14:39:55.119501    2185 log.go:172] (0xc000684a50) (0xc00073a960) Stream added, broadcasting: 3\nI0209 14:39:55.122899    2185 log.go:172] (0xc000684a50) Reply frame received for 3\nI0209 14:39:55.122939    2185 log.go:172] (0xc000684a50) (0xc000692000) Create stream\nI0209 14:39:55.122975    2185 log.go:172] (0xc000684a50) (0xc000692000) Stream added, broadcasting: 5\nI0209 14:39:55.131077    2185 log.go:172] (0xc000684a50) Reply frame received for 5\nI0209 14:39:55.233672    2185 log.go:172] (0xc000684a50) Data frame received for 5\nI0209 14:39:55.234053    2185 log.go:172] (0xc000692000) (5) Data frame handling\nI0209 14:39:55.234134    2185 log.go:172] (0xc000692000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0209 14:39:55.264009    2185 log.go:172] (0xc000684a50) Data frame received for 3\nI0209 14:39:55.264043    2185 log.go:172] (0xc00073a960) (3) Data frame handling\nI0209 14:39:55.264059    2185 log.go:172] (0xc00073a960) (3) Data frame sent\nI0209 14:39:55.418393    2185 log.go:172] (0xc000684a50) (0xc00073a960) Stream removed, broadcasting: 3\nI0209 14:39:55.418724    2185 log.go:172] (0xc000684a50) (0xc000692000) Stream removed, broadcasting: 5\nI0209 14:39:55.418815    2185 log.go:172] (0xc000684a50) Data frame received for 1\nI0209 14:39:55.418919    2185 log.go:172] (0xc00073a8c0) (1) Data frame handling\nI0209 14:39:55.419002    2185 log.go:172] (0xc00073a8c0) (1) Data frame sent\nI0209 14:39:55.419072    2185 log.go:172] (0xc000684a50) (0xc00073a8c0) Stream removed, broadcasting: 1\nI0209 14:39:55.419115    2185 log.go:172] 
(0xc000684a50) Go away received\nI0209 14:39:55.419872    2185 log.go:172] (0xc000684a50) (0xc00073a8c0) Stream removed, broadcasting: 1\nI0209 14:39:55.419896    2185 log.go:172] (0xc000684a50) (0xc00073a960) Stream removed, broadcasting: 3\nI0209 14:39:55.419915    2185 log.go:172] (0xc000684a50) (0xc000692000) Stream removed, broadcasting: 5\n"
Feb  9 14:39:55.430: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  9 14:39:55.430: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  9 14:39:55.430: INFO: Waiting for statefulset status.replicas updated to 0
Feb  9 14:39:55.446: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  9 14:39:55.446: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  9 14:39:55.446: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  9 14:39:55.463: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999978s
Feb  9 14:39:56.480: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.989268192s
Feb  9 14:39:57.488: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97216748s
Feb  9 14:39:58.500: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.964316823s
Feb  9 14:39:59.509: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.952438242s
Feb  9 14:40:00.523: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.943736361s
Feb  9 14:40:01.531: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.929395658s
Feb  9 14:40:02.550: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.920786488s
Feb  9 14:40:03.615: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.901997852s
Feb  9 14:40:04.629: INFO: Verifying statefulset ss doesn't scale past 3 for another 836.928557ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2243
Feb  9 14:40:05.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2243 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:40:06.178: INFO: stderr: "I0209 14:40:05.913793    2203 log.go:172] (0xc000904420) (0xc0008f8640) Create stream\nI0209 14:40:05.913936    2203 log.go:172] (0xc000904420) (0xc0008f8640) Stream added, broadcasting: 1\nI0209 14:40:05.924635    2203 log.go:172] (0xc000904420) Reply frame received for 1\nI0209 14:40:05.924684    2203 log.go:172] (0xc000904420) (0xc000a8c000) Create stream\nI0209 14:40:05.924710    2203 log.go:172] (0xc000904420) (0xc000a8c000) Stream added, broadcasting: 3\nI0209 14:40:05.926619    2203 log.go:172] (0xc000904420) Reply frame received for 3\nI0209 14:40:05.926648    2203 log.go:172] (0xc000904420) (0xc0008f86e0) Create stream\nI0209 14:40:05.926655    2203 log.go:172] (0xc000904420) (0xc0008f86e0) Stream added, broadcasting: 5\nI0209 14:40:05.928022    2203 log.go:172] (0xc000904420) Reply frame received for 5\nI0209 14:40:06.038468    2203 log.go:172] (0xc000904420) Data frame received for 5\nI0209 14:40:06.038593    2203 log.go:172] (0xc0008f86e0) (5) Data frame handling\nI0209 14:40:06.038616    2203 log.go:172] (0xc0008f86e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0209 14:40:06.041350    2203 log.go:172] (0xc000904420) Data frame received for 3\nI0209 14:40:06.041422    2203 log.go:172] (0xc000a8c000) (3) Data frame handling\nI0209 14:40:06.041674    2203 log.go:172] (0xc000a8c000) (3) Data frame sent\nI0209 14:40:06.168223    2203 log.go:172] (0xc000904420) (0xc000a8c000) Stream removed, broadcasting: 3\nI0209 14:40:06.168327    2203 log.go:172] (0xc000904420) Data frame received for 1\nI0209 14:40:06.168375    2203 log.go:172] (0xc0008f8640) (1) Data frame handling\nI0209 14:40:06.168420    2203 log.go:172] (0xc0008f8640) (1) Data frame sent\nI0209 14:40:06.168464    2203 log.go:172] (0xc000904420) (0xc0008f8640) Stream removed, broadcasting: 1\nI0209 14:40:06.168701    2203 log.go:172] (0xc000904420) (0xc0008f86e0) Stream removed, broadcasting: 5\nI0209 14:40:06.168750    2203 log.go:172] 
(0xc000904420) Go away received\nI0209 14:40:06.169364    2203 log.go:172] (0xc000904420) (0xc0008f8640) Stream removed, broadcasting: 1\nI0209 14:40:06.169382    2203 log.go:172] (0xc000904420) (0xc000a8c000) Stream removed, broadcasting: 3\nI0209 14:40:06.169390    2203 log.go:172] (0xc000904420) (0xc0008f86e0) Stream removed, broadcasting: 5\n"
Feb  9 14:40:06.178: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  9 14:40:06.178: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  9 14:40:06.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2243 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:40:06.531: INFO: stderr: "I0209 14:40:06.320090    2223 log.go:172] (0xc0005e4420) (0xc0005fa6e0) Create stream\nI0209 14:40:06.320203    2223 log.go:172] (0xc0005e4420) (0xc0005fa6e0) Stream added, broadcasting: 1\nI0209 14:40:06.323938    2223 log.go:172] (0xc0005e4420) Reply frame received for 1\nI0209 14:40:06.323970    2223 log.go:172] (0xc0005e4420) (0xc0005a21e0) Create stream\nI0209 14:40:06.323995    2223 log.go:172] (0xc0005e4420) (0xc0005a21e0) Stream added, broadcasting: 3\nI0209 14:40:06.325382    2223 log.go:172] (0xc0005e4420) Reply frame received for 3\nI0209 14:40:06.325437    2223 log.go:172] (0xc0005e4420) (0xc000822000) Create stream\nI0209 14:40:06.325460    2223 log.go:172] (0xc0005e4420) (0xc000822000) Stream added, broadcasting: 5\nI0209 14:40:06.326739    2223 log.go:172] (0xc0005e4420) Reply frame received for 5\nI0209 14:40:06.420135    2223 log.go:172] (0xc0005e4420) Data frame received for 3\nI0209 14:40:06.420252    2223 log.go:172] (0xc0005a21e0) (3) Data frame handling\nI0209 14:40:06.420272    2223 log.go:172] (0xc0005a21e0) (3) Data frame sent\nI0209 14:40:06.420447    2223 log.go:172] (0xc0005e4420) Data frame received for 5\nI0209 14:40:06.420502    2223 log.go:172] (0xc000822000) (5) Data frame handling\nI0209 14:40:06.420531    2223 log.go:172] (0xc000822000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0209 14:40:06.522668    2223 log.go:172] (0xc0005e4420) (0xc000822000) Stream removed, broadcasting: 5\nI0209 14:40:06.522785    2223 log.go:172] (0xc0005e4420) Data frame received for 1\nI0209 14:40:06.522803    2223 log.go:172] (0xc0005fa6e0) (1) Data frame handling\nI0209 14:40:06.522821    2223 log.go:172] (0xc0005fa6e0) (1) Data frame sent\nI0209 14:40:06.522829    2223 log.go:172] (0xc0005e4420) (0xc0005fa6e0) Stream removed, broadcasting: 1\nI0209 14:40:06.523299    2223 log.go:172] (0xc0005e4420) (0xc0005a21e0) Stream removed, broadcasting: 3\nI0209 14:40:06.523361    2223 log.go:172] 
(0xc0005e4420) (0xc0005fa6e0) Stream removed, broadcasting: 1\nI0209 14:40:06.523375    2223 log.go:172] (0xc0005e4420) (0xc0005a21e0) Stream removed, broadcasting: 3\nI0209 14:40:06.523387    2223 log.go:172] (0xc0005e4420) (0xc000822000) Stream removed, broadcasting: 5\n"
Feb  9 14:40:06.531: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  9 14:40:06.531: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  9 14:40:06.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2243 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:40:06.976: INFO: stderr: "I0209 14:40:06.696386    2240 log.go:172] (0xc000116dc0) (0xc00064c780) Create stream\nI0209 14:40:06.696583    2240 log.go:172] (0xc000116dc0) (0xc00064c780) Stream added, broadcasting: 1\nI0209 14:40:06.703194    2240 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0209 14:40:06.703271    2240 log.go:172] (0xc000116dc0) (0xc0008d4000) Create stream\nI0209 14:40:06.703281    2240 log.go:172] (0xc000116dc0) (0xc0008d4000) Stream added, broadcasting: 3\nI0209 14:40:06.704679    2240 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0209 14:40:06.704737    2240 log.go:172] (0xc000116dc0) (0xc00078a000) Create stream\nI0209 14:40:06.704755    2240 log.go:172] (0xc000116dc0) (0xc00078a000) Stream added, broadcasting: 5\nI0209 14:40:06.707204    2240 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0209 14:40:06.806150    2240 log.go:172] (0xc000116dc0) Data frame received for 5\nI0209 14:40:06.806231    2240 log.go:172] (0xc00078a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0209 14:40:06.806302    2240 log.go:172] (0xc000116dc0) Data frame received for 3\nI0209 14:40:06.806322    2240 log.go:172] (0xc0008d4000) (3) Data frame handling\nI0209 14:40:06.806329    2240 log.go:172] (0xc0008d4000) (3) Data frame sent\nI0209 14:40:06.806349    2240 log.go:172] (0xc00078a000) (5) Data frame sent\nI0209 14:40:06.966131    2240 log.go:172] (0xc000116dc0) (0xc0008d4000) Stream removed, broadcasting: 3\nI0209 14:40:06.966280    2240 log.go:172] (0xc000116dc0) Data frame received for 1\nI0209 14:40:06.966319    2240 log.go:172] (0xc000116dc0) (0xc00078a000) Stream removed, broadcasting: 5\nI0209 14:40:06.966345    2240 log.go:172] (0xc00064c780) (1) Data frame handling\nI0209 14:40:06.966364    2240 log.go:172] (0xc00064c780) (1) Data frame sent\nI0209 14:40:06.966378    2240 log.go:172] (0xc000116dc0) (0xc00064c780) Stream removed, broadcasting: 1\nI0209 14:40:06.966393    2240 log.go:172] 
(0xc000116dc0) Go away received\nI0209 14:40:06.967208    2240 log.go:172] (0xc000116dc0) (0xc00064c780) Stream removed, broadcasting: 1\nI0209 14:40:06.967241    2240 log.go:172] (0xc000116dc0) (0xc0008d4000) Stream removed, broadcasting: 3\nI0209 14:40:06.967250    2240 log.go:172] (0xc000116dc0) (0xc00078a000) Stream removed, broadcasting: 5\n"
Feb  9 14:40:06.976: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  9 14:40:06.976: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  9 14:40:06.976: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  9 14:40:37.007: INFO: Deleting all statefulset in ns statefulset-2243
Feb  9 14:40:37.013: INFO: Scaling statefulset ss to 0
Feb  9 14:40:37.028: INFO: Waiting for statefulset status.replicas updated to 0
Feb  9 14:40:37.031: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:40:37.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2243" for this suite.
Feb  9 14:40:43.099: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:40:43.258: INFO: namespace statefulset-2243 deletion completed in 6.185980601s

• [SLOW TEST:103.323 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:40:43.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  9 14:40:43.375: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:41:02.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5564" for this suite.
Feb  9 14:41:24.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:41:24.875: INFO: namespace init-container-5564 deletion completed in 22.110347838s

• [SLOW TEST:41.617 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:41:24.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb  9 14:41:24.949: INFO: Waiting up to 5m0s for pod "pod-2b6e6756-4a54-4d6f-a5a8-4b4268eb0641" in namespace "emptydir-2833" to be "success or failure"
Feb  9 14:41:24.958: INFO: Pod "pod-2b6e6756-4a54-4d6f-a5a8-4b4268eb0641": Phase="Pending", Reason="", readiness=false. Elapsed: 8.55966ms
Feb  9 14:41:26.967: INFO: Pod "pod-2b6e6756-4a54-4d6f-a5a8-4b4268eb0641": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017412975s
Feb  9 14:41:28.977: INFO: Pod "pod-2b6e6756-4a54-4d6f-a5a8-4b4268eb0641": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028091855s
Feb  9 14:41:30.985: INFO: Pod "pod-2b6e6756-4a54-4d6f-a5a8-4b4268eb0641": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035443805s
Feb  9 14:41:32.992: INFO: Pod "pod-2b6e6756-4a54-4d6f-a5a8-4b4268eb0641": Phase="Pending", Reason="", readiness=false. Elapsed: 8.042466174s
Feb  9 14:41:35.004: INFO: Pod "pod-2b6e6756-4a54-4d6f-a5a8-4b4268eb0641": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054739643s
STEP: Saw pod success
Feb  9 14:41:35.004: INFO: Pod "pod-2b6e6756-4a54-4d6f-a5a8-4b4268eb0641" satisfied condition "success or failure"
Feb  9 14:41:35.007: INFO: Trying to get logs from node iruya-node pod pod-2b6e6756-4a54-4d6f-a5a8-4b4268eb0641 container test-container: 
STEP: delete the pod
Feb  9 14:41:35.054: INFO: Waiting for pod pod-2b6e6756-4a54-4d6f-a5a8-4b4268eb0641 to disappear
Feb  9 14:41:35.059: INFO: Pod pod-2b6e6756-4a54-4d6f-a5a8-4b4268eb0641 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:41:35.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2833" for this suite.
Feb  9 14:41:41.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:41:41.291: INFO: namespace emptydir-2833 deletion completed in 6.225253702s

• [SLOW TEST:16.415 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:41:41.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Feb  9 14:41:41.392: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:41:56.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4393" for this suite.
Feb  9 14:42:02.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:42:03.101: INFO: namespace init-container-4393 deletion completed in 6.171493703s

• [SLOW TEST:21.810 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:42:03.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb  9 14:42:03.203: INFO: Waiting up to 5m0s for pod "pod-99df0a48-608b-4140-9d8b-6d5e575a8e14" in namespace "emptydir-7360" to be "success or failure"
Feb  9 14:42:03.223: INFO: Pod "pod-99df0a48-608b-4140-9d8b-6d5e575a8e14": Phase="Pending", Reason="", readiness=false. Elapsed: 20.019605ms
Feb  9 14:42:05.232: INFO: Pod "pod-99df0a48-608b-4140-9d8b-6d5e575a8e14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029084445s
Feb  9 14:42:07.241: INFO: Pod "pod-99df0a48-608b-4140-9d8b-6d5e575a8e14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037959457s
Feb  9 14:42:09.252: INFO: Pod "pod-99df0a48-608b-4140-9d8b-6d5e575a8e14": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048848835s
Feb  9 14:42:11.263: INFO: Pod "pod-99df0a48-608b-4140-9d8b-6d5e575a8e14": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060124059s
Feb  9 14:42:13.271: INFO: Pod "pod-99df0a48-608b-4140-9d8b-6d5e575a8e14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067681943s
STEP: Saw pod success
Feb  9 14:42:13.271: INFO: Pod "pod-99df0a48-608b-4140-9d8b-6d5e575a8e14" satisfied condition "success or failure"
Feb  9 14:42:13.275: INFO: Trying to get logs from node iruya-node pod pod-99df0a48-608b-4140-9d8b-6d5e575a8e14 container test-container: 
STEP: delete the pod
Feb  9 14:42:13.344: INFO: Waiting for pod pod-99df0a48-608b-4140-9d8b-6d5e575a8e14 to disappear
Feb  9 14:42:13.350: INFO: Pod pod-99df0a48-608b-4140-9d8b-6d5e575a8e14 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:42:13.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7360" for this suite.
Feb  9 14:42:19.419: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:42:19.548: INFO: namespace emptydir-7360 deletion completed in 6.191963044s

• [SLOW TEST:16.446 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:42:19.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-3ad0918f-b6d1-4d3a-be3a-b9c100379031
STEP: Creating a pod to test consume configMaps
Feb  9 14:42:19.652: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-39fa14f6-f5b5-40c4-b05a-9645af8397de" in namespace "projected-8873" to be "success or failure"
Feb  9 14:42:19.658: INFO: Pod "pod-projected-configmaps-39fa14f6-f5b5-40c4-b05a-9645af8397de": Phase="Pending", Reason="", readiness=false. Elapsed: 5.684604ms
Feb  9 14:42:21.665: INFO: Pod "pod-projected-configmaps-39fa14f6-f5b5-40c4-b05a-9645af8397de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013275364s
Feb  9 14:42:23.678: INFO: Pod "pod-projected-configmaps-39fa14f6-f5b5-40c4-b05a-9645af8397de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025334713s
Feb  9 14:42:25.685: INFO: Pod "pod-projected-configmaps-39fa14f6-f5b5-40c4-b05a-9645af8397de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033024658s
Feb  9 14:42:27.698: INFO: Pod "pod-projected-configmaps-39fa14f6-f5b5-40c4-b05a-9645af8397de": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045874475s
Feb  9 14:42:29.708: INFO: Pod "pod-projected-configmaps-39fa14f6-f5b5-40c4-b05a-9645af8397de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.055560017s
STEP: Saw pod success
Feb  9 14:42:29.708: INFO: Pod "pod-projected-configmaps-39fa14f6-f5b5-40c4-b05a-9645af8397de" satisfied condition "success or failure"
Feb  9 14:42:29.716: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-39fa14f6-f5b5-40c4-b05a-9645af8397de container projected-configmap-volume-test: 
STEP: delete the pod
Feb  9 14:42:29.891: INFO: Waiting for pod pod-projected-configmaps-39fa14f6-f5b5-40c4-b05a-9645af8397de to disappear
Feb  9 14:42:29.905: INFO: Pod pod-projected-configmaps-39fa14f6-f5b5-40c4-b05a-9645af8397de no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:42:29.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8873" for this suite.
Feb  9 14:42:35.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:42:36.043: INFO: namespace projected-8873 deletion completed in 6.128942327s

• [SLOW TEST:16.495 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:42:36.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3527
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-3527
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3527
Feb  9 14:42:36.153: INFO: Found 0 stateful pods, waiting for 1
Feb  9 14:42:46.161: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Feb  9 14:42:56.173: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Feb  9 14:42:56.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  9 14:42:56.813: INFO: stderr: "I0209 14:42:56.351476    2257 log.go:172] (0xc0008b8370) (0xc00078a640) Create stream\nI0209 14:42:56.351649    2257 log.go:172] (0xc0008b8370) (0xc00078a640) Stream added, broadcasting: 1\nI0209 14:42:56.379788    2257 log.go:172] (0xc0008b8370) Reply frame received for 1\nI0209 14:42:56.379859    2257 log.go:172] (0xc0008b8370) (0xc0006c81e0) Create stream\nI0209 14:42:56.379870    2257 log.go:172] (0xc0008b8370) (0xc0006c81e0) Stream added, broadcasting: 3\nI0209 14:42:56.382158    2257 log.go:172] (0xc0008b8370) Reply frame received for 3\nI0209 14:42:56.382199    2257 log.go:172] (0xc0008b8370) (0xc00070e000) Create stream\nI0209 14:42:56.382209    2257 log.go:172] (0xc0008b8370) (0xc00070e000) Stream added, broadcasting: 5\nI0209 14:42:56.384351    2257 log.go:172] (0xc0008b8370) Reply frame received for 5\nI0209 14:42:56.636591    2257 log.go:172] (0xc0008b8370) Data frame received for 3\nI0209 14:42:56.636670    2257 log.go:172] (0xc0006c81e0) (3) Data frame handling\nI0209 14:42:56.636682    2257 log.go:172] (0xc0006c81e0) (3) Data frame sent\nI0209 14:42:56.636721    2257 log.go:172] (0xc0008b8370) Data frame received for 5\nI0209 14:42:56.636757    2257 log.go:172] (0xc00070e000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0209 14:42:56.636855    2257 log.go:172] (0xc00070e000) (5) Data frame sent\nI0209 14:42:56.805771    2257 log.go:172] (0xc0008b8370) (0xc0006c81e0) Stream removed, broadcasting: 3\nI0209 14:42:56.805913    2257 log.go:172] (0xc0008b8370) Data frame received for 1\nI0209 14:42:56.805947    2257 log.go:172] (0xc0008b8370) (0xc00070e000) Stream removed, broadcasting: 5\nI0209 14:42:56.805992    2257 log.go:172] (0xc00078a640) (1) Data frame handling\nI0209 14:42:56.806013    2257 log.go:172] (0xc00078a640) (1) Data frame sent\nI0209 14:42:56.806047    2257 log.go:172] (0xc0008b8370) (0xc00078a640) Stream removed, broadcasting: 1\nI0209 14:42:56.806072    2257 log.go:172] 
(0xc0008b8370) Go away received\nI0209 14:42:56.806696    2257 log.go:172] (0xc0008b8370) (0xc00078a640) Stream removed, broadcasting: 1\nI0209 14:42:56.806726    2257 log.go:172] (0xc0008b8370) (0xc0006c81e0) Stream removed, broadcasting: 3\nI0209 14:42:56.806739    2257 log.go:172] (0xc0008b8370) (0xc00070e000) Stream removed, broadcasting: 5\n"
Feb  9 14:42:56.813: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  9 14:42:56.813: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  9 14:42:56.823: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb  9 14:43:06.836: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  9 14:43:06.836: INFO: Waiting for statefulset status.replicas updated to 0
Feb  9 14:43:06.872: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Feb  9 14:43:06.872: INFO: ss-0  iruya-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:56 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  }]
Feb  9 14:43:06.872: INFO: 
Feb  9 14:43:06.872: INFO: StatefulSet ss has not reached scale 3, at 1
Feb  9 14:43:08.414: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982096879s
Feb  9 14:43:09.684: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.439525783s
Feb  9 14:43:10.692: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.169491274s
Feb  9 14:43:11.704: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.161537384s
Feb  9 14:43:13.163: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.150143615s
Feb  9 14:43:14.291: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.691035929s
Feb  9 14:43:15.553: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.5628772s
Feb  9 14:43:16.633: INFO: Verifying statefulset ss doesn't scale past 3 for another 300.953531ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3527
Feb  9 14:43:17.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:43:18.243: INFO: stderr: "I0209 14:43:17.958868    2274 log.go:172] (0xc00091c0b0) (0xc00090a0a0) Create stream\nI0209 14:43:17.959001    2274 log.go:172] (0xc00091c0b0) (0xc00090a0a0) Stream added, broadcasting: 1\nI0209 14:43:17.971780    2274 log.go:172] (0xc00091c0b0) Reply frame received for 1\nI0209 14:43:17.971840    2274 log.go:172] (0xc00091c0b0) (0xc000964000) Create stream\nI0209 14:43:17.971856    2274 log.go:172] (0xc00091c0b0) (0xc000964000) Stream added, broadcasting: 3\nI0209 14:43:17.975622    2274 log.go:172] (0xc00091c0b0) Reply frame received for 3\nI0209 14:43:17.975752    2274 log.go:172] (0xc00091c0b0) (0xc000650280) Create stream\nI0209 14:43:17.975767    2274 log.go:172] (0xc00091c0b0) (0xc000650280) Stream added, broadcasting: 5\nI0209 14:43:17.977010    2274 log.go:172] (0xc00091c0b0) Reply frame received for 5\nI0209 14:43:18.086203    2274 log.go:172] (0xc00091c0b0) Data frame received for 5\nI0209 14:43:18.086358    2274 log.go:172] (0xc000650280) (5) Data frame handling\nI0209 14:43:18.086392    2274 log.go:172] (0xc000650280) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0209 14:43:18.086420    2274 log.go:172] (0xc00091c0b0) Data frame received for 3\nI0209 14:43:18.086441    2274 log.go:172] (0xc000964000) (3) Data frame handling\nI0209 14:43:18.086457    2274 log.go:172] (0xc000964000) (3) Data frame sent\nI0209 14:43:18.230529    2274 log.go:172] (0xc00091c0b0) Data frame received for 1\nI0209 14:43:18.230725    2274 log.go:172] (0xc00091c0b0) (0xc000650280) Stream removed, broadcasting: 5\nI0209 14:43:18.230787    2274 log.go:172] (0xc00090a0a0) (1) Data frame handling\nI0209 14:43:18.230820    2274 log.go:172] (0xc00090a0a0) (1) Data frame sent\nI0209 14:43:18.230851    2274 log.go:172] (0xc00091c0b0) (0xc000964000) Stream removed, broadcasting: 3\nI0209 14:43:18.231099    2274 log.go:172] (0xc00091c0b0) (0xc00090a0a0) Stream removed, broadcasting: 1\nI0209 14:43:18.231185    2274 log.go:172] 
(0xc00091c0b0) Go away received\nI0209 14:43:18.231770    2274 log.go:172] (0xc00091c0b0) (0xc00090a0a0) Stream removed, broadcasting: 1\nI0209 14:43:18.231787    2274 log.go:172] (0xc00091c0b0) (0xc000964000) Stream removed, broadcasting: 3\nI0209 14:43:18.231798    2274 log.go:172] (0xc00091c0b0) (0xc000650280) Stream removed, broadcasting: 5\n"
Feb  9 14:43:18.243: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  9 14:43:18.243: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  9 14:43:18.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:43:18.744: INFO: stderr: "I0209 14:43:18.445147    2296 log.go:172] (0xc00095a370) (0xc0009d6640) Create stream\nI0209 14:43:18.445762    2296 log.go:172] (0xc00095a370) (0xc0009d6640) Stream added, broadcasting: 1\nI0209 14:43:18.460430    2296 log.go:172] (0xc00095a370) Reply frame received for 1\nI0209 14:43:18.460824    2296 log.go:172] (0xc00095a370) (0xc0008b4000) Create stream\nI0209 14:43:18.460979    2296 log.go:172] (0xc00095a370) (0xc0008b4000) Stream added, broadcasting: 3\nI0209 14:43:18.467633    2296 log.go:172] (0xc00095a370) Reply frame received for 3\nI0209 14:43:18.468250    2296 log.go:172] (0xc00095a370) (0xc0009d66e0) Create stream\nI0209 14:43:18.468398    2296 log.go:172] (0xc00095a370) (0xc0009d66e0) Stream added, broadcasting: 5\nI0209 14:43:18.480452    2296 log.go:172] (0xc00095a370) Reply frame received for 5\nI0209 14:43:18.628797    2296 log.go:172] (0xc00095a370) Data frame received for 5\nI0209 14:43:18.628902    2296 log.go:172] (0xc0009d66e0) (5) Data frame handling\nI0209 14:43:18.628946    2296 log.go:172] (0xc0009d66e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0209 14:43:18.631092    2296 log.go:172] (0xc00095a370) Data frame received for 3\nI0209 14:43:18.631126    2296 log.go:172] (0xc0008b4000) (3) Data frame handling\nI0209 14:43:18.631138    2296 log.go:172] (0xc0008b4000) (3) Data frame sent\nI0209 14:43:18.632542    2296 log.go:172] (0xc00095a370) Data frame received for 5\nI0209 14:43:18.632674    2296 log.go:172] (0xc0009d66e0) (5) Data frame handling\nI0209 14:43:18.632696    2296 log.go:172] (0xc0009d66e0) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0209 14:43:18.736500    2296 log.go:172] (0xc00095a370) (0xc0008b4000) Stream removed, broadcasting: 3\nI0209 14:43:18.736658    2296 log.go:172] (0xc00095a370) Data frame received for 1\nI0209 14:43:18.736714    2296 log.go:172] (0xc0009d6640) (1) Data frame handling\nI0209 
14:43:18.736738    2296 log.go:172] (0xc0009d6640) (1) Data frame sent\nI0209 14:43:18.736997    2296 log.go:172] (0xc00095a370) (0xc0009d66e0) Stream removed, broadcasting: 5\nI0209 14:43:18.737240    2296 log.go:172] (0xc00095a370) (0xc0009d6640) Stream removed, broadcasting: 1\nI0209 14:43:18.737293    2296 log.go:172] (0xc00095a370) Go away received\nI0209 14:43:18.738046    2296 log.go:172] (0xc00095a370) (0xc0009d6640) Stream removed, broadcasting: 1\nI0209 14:43:18.738081    2296 log.go:172] (0xc00095a370) (0xc0008b4000) Stream removed, broadcasting: 3\nI0209 14:43:18.738092    2296 log.go:172] (0xc00095a370) (0xc0009d66e0) Stream removed, broadcasting: 5\n"
Feb  9 14:43:18.745: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  9 14:43:18.745: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  9 14:43:18.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:43:19.229: INFO: stderr: "I0209 14:43:18.912750    2315 log.go:172] (0xc000a4e420) (0xc000682a00) Create stream\nI0209 14:43:18.912975    2315 log.go:172] (0xc000a4e420) (0xc000682a00) Stream added, broadcasting: 1\nI0209 14:43:18.947309    2315 log.go:172] (0xc000a4e420) Reply frame received for 1\nI0209 14:43:18.947549    2315 log.go:172] (0xc000a4e420) (0xc0006821e0) Create stream\nI0209 14:43:18.947581    2315 log.go:172] (0xc000a4e420) (0xc0006821e0) Stream added, broadcasting: 3\nI0209 14:43:18.954136    2315 log.go:172] (0xc000a4e420) Reply frame received for 3\nI0209 14:43:18.954384    2315 log.go:172] (0xc000a4e420) (0xc000014000) Create stream\nI0209 14:43:18.954425    2315 log.go:172] (0xc000a4e420) (0xc000014000) Stream added, broadcasting: 5\nI0209 14:43:18.959350    2315 log.go:172] (0xc000a4e420) Reply frame received for 5\nI0209 14:43:19.097703    2315 log.go:172] (0xc000a4e420) Data frame received for 3\nI0209 14:43:19.097778    2315 log.go:172] (0xc0006821e0) (3) Data frame handling\nI0209 14:43:19.097793    2315 log.go:172] (0xc0006821e0) (3) Data frame sent\nI0209 14:43:19.097858    2315 log.go:172] (0xc000a4e420) Data frame received for 5\nI0209 14:43:19.097897    2315 log.go:172] (0xc000014000) (5) Data frame handling\nI0209 14:43:19.097921    2315 log.go:172] (0xc000014000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0209 14:43:19.218785    2315 log.go:172] (0xc000a4e420) Data frame received for 1\nI0209 14:43:19.218874    2315 log.go:172] (0xc000682a00) (1) Data frame handling\nI0209 14:43:19.218898    2315 log.go:172] (0xc000682a00) (1) Data frame sent\nI0209 14:43:19.218931    2315 log.go:172] (0xc000a4e420) (0xc0006821e0) Stream removed, broadcasting: 3\nI0209 14:43:19.219040    2315 log.go:172] (0xc000a4e420) (0xc000014000) Stream removed, broadcasting: 5\nI0209 14:43:19.219112    2315 log.go:172] (0xc000a4e420) (0xc000682a00) 
Stream removed, broadcasting: 1\nI0209 14:43:19.219134    2315 log.go:172] (0xc000a4e420) Go away received\nI0209 14:43:19.220377    2315 log.go:172] (0xc000a4e420) (0xc000682a00) Stream removed, broadcasting: 1\nI0209 14:43:19.220404    2315 log.go:172] (0xc000a4e420) (0xc0006821e0) Stream removed, broadcasting: 3\nI0209 14:43:19.220422    2315 log.go:172] (0xc000a4e420) (0xc000014000) Stream removed, broadcasting: 5\n"
Feb  9 14:43:19.229: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Feb  9 14:43:19.229: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Feb  9 14:43:19.238: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:43:19.238: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:43:19.238: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false
Feb  9 14:43:29.248: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:43:29.248: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb  9 14:43:29.248: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Feb  9 14:43:29.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  9 14:43:29.866: INFO: stderr: "I0209 14:43:29.513992    2336 log.go:172] (0xc00069a420) (0xc00066a640) Create stream\nI0209 14:43:29.514096    2336 log.go:172] (0xc00069a420) (0xc00066a640) Stream added, broadcasting: 1\nI0209 14:43:29.521021    2336 log.go:172] (0xc00069a420) Reply frame received for 1\nI0209 14:43:29.521096    2336 log.go:172] (0xc00069a420) (0xc00066a6e0) Create stream\nI0209 14:43:29.521126    2336 log.go:172] (0xc00069a420) (0xc00066a6e0) Stream added, broadcasting: 3\nI0209 14:43:29.523853    2336 log.go:172] (0xc00069a420) Reply frame received for 3\nI0209 14:43:29.523906    2336 log.go:172] (0xc00069a420) (0xc00066a780) Create stream\nI0209 14:43:29.523915    2336 log.go:172] (0xc00069a420) (0xc00066a780) Stream added, broadcasting: 5\nI0209 14:43:29.527942    2336 log.go:172] (0xc00069a420) Reply frame received for 5\nI0209 14:43:29.660368    2336 log.go:172] (0xc00069a420) Data frame received for 5\nI0209 14:43:29.660981    2336 log.go:172] (0xc00066a780) (5) Data frame handling\nI0209 14:43:29.661031    2336 log.go:172] (0xc00066a780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0209 14:43:29.661170    2336 log.go:172] (0xc00069a420) Data frame received for 3\nI0209 14:43:29.661204    2336 log.go:172] (0xc00066a6e0) (3) Data frame handling\nI0209 14:43:29.661219    2336 log.go:172] (0xc00066a6e0) (3) Data frame sent\nI0209 14:43:29.849786    2336 log.go:172] (0xc00069a420) (0xc00066a6e0) Stream removed, broadcasting: 3\nI0209 14:43:29.850464    2336 log.go:172] (0xc00069a420) Data frame received for 1\nI0209 14:43:29.850816    2336 log.go:172] (0xc00069a420) (0xc00066a780) Stream removed, broadcasting: 5\nI0209 14:43:29.850938    2336 log.go:172] (0xc00066a640) (1) Data frame handling\nI0209 14:43:29.850971    2336 log.go:172] (0xc00066a640) (1) Data frame sent\nI0209 14:43:29.850998    2336 log.go:172] (0xc00069a420) (0xc00066a640) Stream removed, broadcasting: 1\nI0209 14:43:29.851031    2336 log.go:172] 
(0xc00069a420) Go away received\nI0209 14:43:29.852021    2336 log.go:172] (0xc00069a420) (0xc00066a640) Stream removed, broadcasting: 1\nI0209 14:43:29.852091    2336 log.go:172] (0xc00069a420) (0xc00066a6e0) Stream removed, broadcasting: 3\nI0209 14:43:29.852104    2336 log.go:172] (0xc00069a420) (0xc00066a780) Stream removed, broadcasting: 5\n"
Feb  9 14:43:29.867: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  9 14:43:29.867: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  9 14:43:29.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  9 14:43:30.620: INFO: stderr: "I0209 14:43:30.273615    2357 log.go:172] (0xc00012a0b0) (0xc0008b5540) Create stream\nI0209 14:43:30.273776    2357 log.go:172] (0xc00012a0b0) (0xc0008b5540) Stream added, broadcasting: 1\nI0209 14:43:30.277063    2357 log.go:172] (0xc00012a0b0) Reply frame received for 1\nI0209 14:43:30.277104    2357 log.go:172] (0xc00012a0b0) (0xc0005c81e0) Create stream\nI0209 14:43:30.277124    2357 log.go:172] (0xc00012a0b0) (0xc0005c81e0) Stream added, broadcasting: 3\nI0209 14:43:30.278659    2357 log.go:172] (0xc00012a0b0) Reply frame received for 3\nI0209 14:43:30.278685    2357 log.go:172] (0xc00012a0b0) (0xc000572320) Create stream\nI0209 14:43:30.278698    2357 log.go:172] (0xc00012a0b0) (0xc000572320) Stream added, broadcasting: 5\nI0209 14:43:30.280944    2357 log.go:172] (0xc00012a0b0) Reply frame received for 5\nI0209 14:43:30.499038    2357 log.go:172] (0xc00012a0b0) Data frame received for 5\nI0209 14:43:30.499144    2357 log.go:172] (0xc000572320) (5) Data frame handling\nI0209 14:43:30.499200    2357 log.go:172] (0xc000572320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0209 14:43:30.545743    2357 log.go:172] (0xc00012a0b0) Data frame received for 3\nI0209 14:43:30.545825    2357 log.go:172] (0xc0005c81e0) (3) Data frame handling\nI0209 14:43:30.545853    2357 log.go:172] (0xc0005c81e0) (3) Data frame sent\nI0209 14:43:30.612699    2357 log.go:172] (0xc00012a0b0) Data frame received for 1\nI0209 14:43:30.612742    2357 log.go:172] (0xc0008b5540) (1) Data frame handling\nI0209 14:43:30.612768    2357 log.go:172] (0xc0008b5540) (1) Data frame sent\nI0209 14:43:30.612794    2357 log.go:172] (0xc00012a0b0) (0xc0008b5540) Stream removed, broadcasting: 1\nI0209 14:43:30.613367    2357 log.go:172] (0xc00012a0b0) (0xc0005c81e0) Stream removed, broadcasting: 3\nI0209 14:43:30.613386    2357 log.go:172] (0xc00012a0b0) (0xc000572320) Stream removed, broadcasting: 5\nI0209 14:43:30.613409    2357 log.go:172] 
(0xc00012a0b0) (0xc0008b5540) Stream removed, broadcasting: 1\nI0209 14:43:30.613416    2357 log.go:172] (0xc00012a0b0) (0xc0005c81e0) Stream removed, broadcasting: 3\nI0209 14:43:30.613426    2357 log.go:172] (0xc00012a0b0) (0xc000572320) Stream removed, broadcasting: 5\n"
Feb  9 14:43:30.620: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  9 14:43:30.620: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Feb  9 14:43:30.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Feb  9 14:43:31.065: INFO: stderr: "I0209 14:43:30.779983    2376 log.go:172] (0xc0008aa420) (0xc00075c640) Create stream\nI0209 14:43:30.780141    2376 log.go:172] (0xc0008aa420) (0xc00075c640) Stream added, broadcasting: 1\nI0209 14:43:30.783660    2376 log.go:172] (0xc0008aa420) Reply frame received for 1\nI0209 14:43:30.783699    2376 log.go:172] (0xc0008aa420) (0xc0008a8000) Create stream\nI0209 14:43:30.783710    2376 log.go:172] (0xc0008aa420) (0xc0008a8000) Stream added, broadcasting: 3\nI0209 14:43:30.784560    2376 log.go:172] (0xc0008aa420) Reply frame received for 3\nI0209 14:43:30.784583    2376 log.go:172] (0xc0008aa420) (0xc000642280) Create stream\nI0209 14:43:30.784591    2376 log.go:172] (0xc0008aa420) (0xc000642280) Stream added, broadcasting: 5\nI0209 14:43:30.785410    2376 log.go:172] (0xc0008aa420) Reply frame received for 5\nI0209 14:43:30.915748    2376 log.go:172] (0xc0008aa420) Data frame received for 5\nI0209 14:43:30.915848    2376 log.go:172] (0xc000642280) (5) Data frame handling\nI0209 14:43:30.915892    2376 log.go:172] (0xc000642280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0209 14:43:30.960936    2376 log.go:172] (0xc0008aa420) Data frame received for 3\nI0209 14:43:30.961012    2376 log.go:172] (0xc0008a8000) (3) Data frame handling\nI0209 14:43:30.961036    2376 log.go:172] (0xc0008a8000) (3) Data frame sent\nI0209 14:43:31.053080    2376 log.go:172] (0xc0008aa420) (0xc0008a8000) Stream removed, broadcasting: 3\nI0209 14:43:31.053293    2376 log.go:172] (0xc0008aa420) Data frame received for 1\nI0209 14:43:31.053312    2376 log.go:172] (0xc00075c640) (1) Data frame handling\nI0209 14:43:31.053339    2376 log.go:172] (0xc00075c640) (1) Data frame sent\nI0209 14:43:31.053353    2376 log.go:172] (0xc0008aa420) (0xc00075c640) Stream removed, broadcasting: 1\nI0209 14:43:31.053902    2376 log.go:172] (0xc0008aa420) (0xc000642280) Stream removed, broadcasting: 5\nI0209 14:43:31.054167    2376 log.go:172] 
(0xc0008aa420) Go away received\nI0209 14:43:31.054232    2376 log.go:172] (0xc0008aa420) (0xc00075c640) Stream removed, broadcasting: 1\nI0209 14:43:31.054268    2376 log.go:172] (0xc0008aa420) (0xc0008a8000) Stream removed, broadcasting: 3\nI0209 14:43:31.054315    2376 log.go:172] (0xc0008aa420) (0xc000642280) Stream removed, broadcasting: 5\n"
Feb  9 14:43:31.065: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Feb  9 14:43:31.065: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
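The three exec invocations above (ss-0, ss-1, ss-2) follow one pattern: move `index.html` out of the nginx web root on each replica so its readiness probe starts failing. A minimal sketch of that loop, with the command runner injectable for testing (the real call shells out to `kubectl`, assumed on PATH; the helper name is illustrative, not the framework's):

```python
import subprocess

def break_readiness(pods, namespace, run=subprocess.run):
    """Make each replica's readiness probe fail by moving index.html
    out of the nginx web root. `run` is injectable so the loop can be
    exercised without a cluster; by default it shells out to kubectl.
    """
    cmd_tail = "mv -v /usr/share/nginx/html/index.html /tmp/ || true"
    results = []
    for pod in pods:
        argv = ["kubectl", "--namespace=" + namespace, "exec", pod,
                "--", "/bin/sh", "-c", cmd_tail]
        # Capture stdout/stderr so callers can log them, as the e2e
        # framework does above.
        results.append(run(argv, capture_output=True, text=True))
    return results
```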

Feb  9 14:43:31.065: INFO: Waiting for statefulset status.replicas updated to 0
Feb  9 14:43:31.071: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Feb  9 14:43:41.082: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb  9 14:43:41.082: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb  9 14:43:41.082: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb  9 14:43:41.101: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  9 14:43:41.101: INFO: ss-0  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  }]
Feb  9 14:43:41.101: INFO: ss-1  iruya-server-sfge57q7djm7  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:41.101: INFO: ss-2  iruya-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:41.101: INFO: 
Feb  9 14:43:41.101: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  9 14:43:42.109: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  9 14:43:42.109: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  }]
Feb  9 14:43:42.109: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:42.109: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:42.109: INFO: 
Feb  9 14:43:42.109: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  9 14:43:43.117: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  9 14:43:43.117: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  }]
Feb  9 14:43:43.117: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:43.117: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:43.117: INFO: 
Feb  9 14:43:43.117: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  9 14:43:44.135: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  9 14:43:44.135: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  }]
Feb  9 14:43:44.135: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:44.136: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:44.136: INFO: 
Feb  9 14:43:44.136: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  9 14:43:45.516: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  9 14:43:45.516: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  }]
Feb  9 14:43:45.517: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:45.517: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:45.517: INFO: 
Feb  9 14:43:45.517: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  9 14:43:46.531: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  9 14:43:46.531: INFO: ss-0  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  }]
Feb  9 14:43:46.531: INFO: ss-1  iruya-server-sfge57q7djm7  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:46.531: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:46.531: INFO: 
Feb  9 14:43:46.531: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  9 14:43:47.540: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  9 14:43:47.540: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  }]
Feb  9 14:43:47.540: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:47.540: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:47.540: INFO: 
Feb  9 14:43:47.540: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  9 14:43:48.553: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  9 14:43:48.553: INFO: ss-0  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:42:36 +0000 UTC  }]
Feb  9 14:43:48.553: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:48.553: INFO: ss-2  iruya-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:48.553: INFO: 
Feb  9 14:43:48.553: INFO: StatefulSet ss has not reached scale 0, at 3
Feb  9 14:43:49.575: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  9 14:43:49.575: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:49.576: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:49.576: INFO: 
Feb  9 14:43:49.576: INFO: StatefulSet ss has not reached scale 0, at 2
Feb  9 14:43:50.585: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Feb  9 14:43:50.585: INFO: ss-1  iruya-server-sfge57q7djm7  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:50.585: INFO: ss-2  iruya-node                 Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:31 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 14:43:06 +0000 UTC  }]
Feb  9 14:43:50.585: INFO: 
Feb  9 14:43:50.585: INFO: StatefulSet ss has not reached scale 0, at 2
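The repeated "StatefulSet ss has not reached scale 0, at N" entries above come from a poll-until-scale loop. A minimal, hypothetical sketch of that wait (the `get_replicas` callback stands in for a real API read of the StatefulSet's pod count; names are illustrative):

```python
import time

def wait_for_scale(get_replicas, target=0, timeout=600, interval=1.0):
    """Poll until the StatefulSet reports `target` replicas, or give up
    after `timeout` seconds. Returns True on success, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_replicas() == target:
            return True
        time.sleep(interval)  # the e2e log above polls roughly every second
    return False
```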
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3527
Feb  9 14:43:51.596: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:43:51.962: INFO: rc: 1
Feb  9 14:43:51.963: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc0027d5440 exit status 1   true [0xc000bf76d8 0xc000bf7848 0xc000bf7918] [0xc000bf76d8 0xc000bf7848 0xc000bf7918] [0xc000bf7808 0xc000bf78d8] [0xba6c50 0xba6c50] 0xc002281800 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
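The "Waiting 10s to retry failed RunHostCmd" entries that follow are produced by a retry wrapper around the exec command: the pod is terminating (first "container not found", then "pods \"ss-1\" not found"), so each attempt exits nonzero and the wrapper sleeps and retries. A minimal sketch of that retry logic (the `runner` callback stands in for the real kubectl invocation; names and signature are illustrative, not the framework's):

```python
import time

def run_with_retry(cmd, attempts=30, delay=10, runner=None):
    """Retry `cmd` until it succeeds (rc == 0) or attempts run out.

    runner(cmd) must return an (rc, stdout, stderr) tuple; it stands in
    for the real kubectl exec call made by the e2e framework.
    """
    rc, out, err = 1, "", ""
    for i in range(attempts):
        rc, out, err = runner(cmd)
        if rc == 0:
            return out
        if i < attempts - 1:
            time.sleep(delay)  # matches the 10s back-off in the log
    raise RuntimeError(f"command failed after {attempts} attempts: {err}")
```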
Feb  9 14:44:01.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:44:02.092: INFO: rc: 1
Feb  9 14:44:02.092: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002be4ed0 exit status 1   true [0xc002cee0c8 0xc002cee0e0 0xc002cee0f8] [0xc002cee0c8 0xc002cee0e0 0xc002cee0f8] [0xc002cee0d8 0xc002cee0f0] [0xba6c50 0xba6c50] 0xc0033af200 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  9 14:44:12.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:44:12.221: INFO: rc: 1
Feb  9 14:44:12.221: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0027d5560 exit status 1   true [0xc000bf7948 0xc000bf7a00 0xc000bf7aa8] [0xc000bf7948 0xc000bf7a00 0xc000bf7aa8] [0xc000bf79b0 0xc000bf7a48] [0xba6c50 0xba6c50] 0xc002281d40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  9 14:44:22.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:44:22.379: INFO: rc: 1
Feb  9 14:44:22.379: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002be4fc0 exit status 1   true [0xc002cee100 0xc002cee118 0xc002cee130] [0xc002cee100 0xc002cee118 0xc002cee130] [0xc002cee110 0xc002cee128] [0xba6c50 0xba6c50] 0xc0033af500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  9 14:44:32.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:44:32.558: INFO: rc: 1
Feb  9 14:44:32.559: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002be5080 exit status 1   true [0xc002cee138 0xc002cee150 0xc002cee170] [0xc002cee138 0xc002cee150 0xc002cee170] [0xc002cee148 0xc002cee160] [0xba6c50 0xba6c50] 0xc0033af800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  9 14:44:42.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:44:42.688: INFO: rc: 1
Feb  9 14:44:42.688: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc001eef320 exit status 1   true [0xc0009574d0 0xc000957538 0xc000957570] [0xc0009574d0 0xc000957538 0xc000957570] [0xc000957520 0xc000957558] [0xba6c50 0xba6c50] 0xc00204b680 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  9 14:44:52.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:44:52.828: INFO: rc: 1
Feb  9 14:44:52.828: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002952090 exit status 1   true [0xc000bf6000 0xc000bf6330 0xc000bf67c8] [0xc000bf6000 0xc000bf6330 0xc000bf67c8] [0xc000bf62a0 0xc000bf6740] [0xba6c50 0xba6c50] 0xc0027ec960 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  9 14:45:02.828: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:45:03.020: INFO: rc: 1
Feb  9 14:45:03.021: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc002cd6180 exit status 1   true [0xc002cee000 0xc002cee020 0xc002cee048] [0xc002cee000 0xc002cee020 0xc002cee048] [0xc002cee010 0xc002cee040] [0xba6c50 0xba6c50] 0xc002344300 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  9 14:45:13.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:45:13.220: INFO: rc: 1
Feb  9 14:45:13.220: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-1" not found
 []  0xc0029521e0 exit status 1   true [0xc000bf68e0 0xc000bf6b90 0xc000bf6f30] [0xc000bf68e0 0xc000bf6b90 0xc000bf6f30] [0xc000bf6ab0 0xc000bf6e30] [0xba6c50 0xba6c50] 0xc0027eccc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-1" not found

error:
exit status 1
Feb  9 14:48:56.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3527 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Feb  9 14:48:56.878: INFO: rc: 1
Feb  9 14:48:56.879: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: 
Feb  9 14:48:56.879: INFO: Scaling statefulset ss to 0
Feb  9 14:48:56.913: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Feb  9 14:48:56.919: INFO: Deleting all statefulset in ns statefulset-3527
Feb  9 14:48:56.927: INFO: Scaling statefulset ss to 0
Feb  9 14:48:56.950: INFO: Waiting for statefulset status.replicas updated to 0
Feb  9 14:48:56.954: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:48:56.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3527" for this suite.
Feb  9 14:49:03.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:49:03.139: INFO: namespace statefulset-3527 deletion completed in 6.155383516s

• [SLOW TEST:387.096 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
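The block above shows the framework re-running the failed RunHostCmd every 10s until its timeout elapsed, then giving up and logging whatever stdout it had. A minimal poll-until-deadline sketch of that retry pattern (hypothetical helper names, not the framework's real code):

```python
import time

def retry_host_cmd(cmd, timeout_s, interval_s=10):
    """Poll cmd until it succeeds or the deadline passes, mirroring the
    e2e framework's retry-every-10s behaviour (illustrative only)."""
    deadline = time.monotonic() + timeout_s
    while True:
        ok, out = cmd()
        if ok:
            return out
        if time.monotonic() >= deadline:
            return out  # give up; caller logs whatever stdout was captured
        time.sleep(interval_s)

# Simulated command that fails twice, then succeeds.
calls = {"n": 0}
def fake_cmd():
    calls["n"] += 1
    return (calls["n"] >= 3, "moved" if calls["n"] >= 3 else "")

print(retry_host_cmd(fake_cmd, timeout_s=5, interval_s=0.01))  # moved
```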
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:49:03.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb  9 14:49:09.530: INFO: 0 pods remaining
Feb  9 14:49:09.531: INFO: 0 pods have nil DeletionTimestamp
Feb  9 14:49:09.531: INFO: 
STEP: Gathering metrics
W0209 14:49:10.287066       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  9 14:49:10.287: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:49:10.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-817" for this suite.
Feb  9 14:49:20.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:49:20.677: INFO: namespace gc-817 deletion completed in 10.385941329s

• [SLOW TEST:17.537 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
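The garbage-collector test above verifies foreground cascading deletion: with the right deleteOptions the RC is marked for deletion but kept around until every dependent pod is gone. A small simulation of that ordering (pure Python, not client-go; names are illustrative):

```python
def foreground_delete(owner, dependents):
    """Simulate foreground cascading deletion: the owner gets a deletion
    timestamp immediately but is only removed after all dependents are gone."""
    deleted = []
    owner["deletionTimestamp"] = "now"   # owner kept around, marked for deletion
    for pod in list(dependents):
        deleted.append(pod)
        dependents.remove(pod)
    if not dependents:                   # only now does the owner disappear
        deleted.append(owner["name"])
    return deleted

rc = {"name": "simpletest.rc"}
order = foreground_delete(rc, ["pod-0", "pod-1", "pod-2"])
print(order)  # the rc is deleted last, after all of its pods
```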
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:49:20.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  9 14:49:20.828: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  9 14:49:20.840: INFO: Waiting for terminating namespaces to be deleted...
Feb  9 14:49:20.844: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  9 14:49:20.942: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  9 14:49:20.942: INFO: 	Container weave ready: true, restart count 0
Feb  9 14:49:20.942: INFO: 	Container weave-npc ready: true, restart count 0
Feb  9 14:49:20.942: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb  9 14:49:20.942: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  9 14:49:20.942: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  9 14:49:20.983: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  9 14:49:20.983: INFO: 	Container coredns ready: true, restart count 0
Feb  9 14:49:20.983: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb  9 14:49:20.983: INFO: 	Container etcd ready: true, restart count 0
Feb  9 14:49:20.983: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  9 14:49:20.983: INFO: 	Container weave ready: true, restart count 0
Feb  9 14:49:20.983: INFO: 	Container weave-npc ready: true, restart count 0
Feb  9 14:49:20.983: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb  9 14:49:20.983: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  9 14:49:20.983: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb  9 14:49:20.983: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  9 14:49:20.983: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb  9 14:49:20.983: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  9 14:49:20.983: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb  9 14:49:20.983: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  9 14:49:20.983: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  9 14:49:20.983: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-2222e54c-c198-4994-ad7d-dc3a490ab62f 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-2222e54c-c198-4994-ad7d-dc3a490ab62f off the node iruya-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-2222e54c-c198-4994-ad7d-dc3a490ab62f
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:49:41.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6288" for this suite.
Feb  9 14:50:01.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:50:01.573: INFO: namespace sched-pred-6288 deletion completed in 20.221323238s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:40.896 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
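The NodeSelector predicate exercised above is a subset check: a pod may schedule onto a node only if every key/value pair in its nodeSelector is present in the node's labels. A sketch of that matching rule, reusing the random label the test applied:

```python
def node_selector_matches(node_labels, node_selector):
    """True when every nodeSelector key/value pair appears in the node's
    labels -- the subset semantics the NodeSelector predicate enforces."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

node = {
    "kubernetes.io/hostname": "iruya-node",
    "kubernetes.io/e2e-2222e54c-c198-4994-ad7d-dc3a490ab62f": "42",
}
# Matching value -> schedulable; wrong value -> filtered out.
print(node_selector_matches(node, {"kubernetes.io/e2e-2222e54c-c198-4994-ad7d-dc3a490ab62f": "42"}))  # True
print(node_selector_matches(node, {"kubernetes.io/e2e-2222e54c-c198-4994-ad7d-dc3a490ab62f": "43"}))  # False
```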
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:50:01.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb  9 14:50:13.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-36fe5c8b-aca6-4c10-8c47-db04d68e01d7 -c busybox-main-container --namespace=emptydir-7290 -- cat /usr/share/volumeshare/shareddata.txt'
Feb  9 14:50:16.791: INFO: stderr: "I0209 14:50:16.270709    2961 log.go:172] (0xc0008e4210) (0xc0008de0a0) Create stream\nI0209 14:50:16.270753    2961 log.go:172] (0xc0008e4210) (0xc0008de0a0) Stream added, broadcasting: 1\nI0209 14:50:16.281327    2961 log.go:172] (0xc0008e4210) Reply frame received for 1\nI0209 14:50:16.281424    2961 log.go:172] (0xc0008e4210) (0xc000578280) Create stream\nI0209 14:50:16.281448    2961 log.go:172] (0xc0008e4210) (0xc000578280) Stream added, broadcasting: 3\nI0209 14:50:16.285139    2961 log.go:172] (0xc0008e4210) Reply frame received for 3\nI0209 14:50:16.285174    2961 log.go:172] (0xc0008e4210) (0xc0007320a0) Create stream\nI0209 14:50:16.285186    2961 log.go:172] (0xc0008e4210) (0xc0007320a0) Stream added, broadcasting: 5\nI0209 14:50:16.287899    2961 log.go:172] (0xc0008e4210) Reply frame received for 5\nI0209 14:50:16.449276    2961 log.go:172] (0xc0008e4210) Data frame received for 3\nI0209 14:50:16.449324    2961 log.go:172] (0xc000578280) (3) Data frame handling\nI0209 14:50:16.449344    2961 log.go:172] (0xc000578280) (3) Data frame sent\nI0209 14:50:16.779178    2961 log.go:172] (0xc0008e4210) (0xc000578280) Stream removed, broadcasting: 3\nI0209 14:50:16.779380    2961 log.go:172] (0xc0008e4210) Data frame received for 1\nI0209 14:50:16.779396    2961 log.go:172] (0xc0008de0a0) (1) Data frame handling\nI0209 14:50:16.779411    2961 log.go:172] (0xc0008de0a0) (1) Data frame sent\nI0209 14:50:16.779421    2961 log.go:172] (0xc0008e4210) (0xc0008de0a0) Stream removed, broadcasting: 1\nI0209 14:50:16.779911    2961 log.go:172] (0xc0008e4210) (0xc0007320a0) Stream removed, broadcasting: 5\nI0209 14:50:16.780087    2961 log.go:172] (0xc0008e4210) Go away received\nI0209 14:50:16.780117    2961 log.go:172] (0xc0008e4210) (0xc0008de0a0) Stream removed, broadcasting: 1\nI0209 14:50:16.780129    2961 log.go:172] (0xc0008e4210) (0xc000578280) Stream removed, broadcasting: 3\nI0209 14:50:16.780139    2961 log.go:172] (0xc0008e4210) (0xc0007320a0) Stream removed, broadcasting: 5\n"
Feb  9 14:50:16.792: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:50:16.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7290" for this suite.
Feb  9 14:50:22.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:50:23.053: INFO: namespace emptydir-7290 deletion completed in 6.244926867s

• [SLOW TEST:21.479 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
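The EmptyDir test above works because both containers mount the same volume: a file written through one mount is immediately visible through the other. A local sketch with a temporary directory standing in for the emptyDir volume:

```python
import pathlib
import tempfile

# A temporary directory stands in for the emptyDir volume both containers mount.
with tempfile.TemporaryDirectory() as vol:
    shared = pathlib.Path(vol) / "shareddata.txt"
    # The "busybox-sub-container" writes through its mount...
    shared.write_text("Hello from the busy-box sub-container\n")
    # ...and the "busybox-main-container" reads the same file through its own.
    print(shared.read_text(), end="")  # Hello from the busy-box sub-container
```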
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:50:23.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-771983d1-1914-46ce-9470-629cb5c4ebf8
STEP: Creating secret with name s-test-opt-upd-96c2e411-62b8-4891-989e-7f0fe20d77d6
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-771983d1-1914-46ce-9470-629cb5c4ebf8
STEP: Updating secret s-test-opt-upd-96c2e411-62b8-4891-989e-7f0fe20d77d6
STEP: Creating secret with name s-test-opt-create-48576e5b-b483-4266-a84c-bb80ef78bcc8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:51:43.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4037" for this suite.
Feb  9 14:52:23.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:52:23.490: INFO: namespace projected-4037 deletion completed in 40.166822826s

• [SLOW TEST:120.436 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
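The "optional updates should be reflected in volume" spec creates optional projected secrets, then deletes one and updates another while the pod is running. A sketch of the volume shape it exercises, assuming truncated versions of the secret names from the log (the real names carry generated UUID suffixes):

```yaml
# Hypothetical sketch: projected volume with optional secret sources.
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-example   # illustrative name
spec:
  containers:
  - name: consumer
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secrets
      mountPath: /etc/projected
  volumes:
  - name: projected-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del    # optional: volume stays valid if this secret is deleted
          optional: true
      - secret:
          name: s-test-opt-upd    # updates here are eventually reflected in the mounted files
          optional: true
```

With `optional: true`, deleting `s-test-opt-del` removes its keys from the volume instead of failing the pod, which is exactly the behavior the "waiting to observe update in volume" step verifies.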
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:52:23.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-d8ff83a8-eb83-46f2-a193-609572c7b8bc in namespace container-probe-436
Feb  9 14:52:35.650: INFO: Started pod test-webserver-d8ff83a8-eb83-46f2-a193-609572c7b8bc in namespace container-probe-436
STEP: checking the pod's current state and verifying that restartCount is present
Feb  9 14:52:35.654: INFO: Initial restart count of pod test-webserver-d8ff83a8-eb83-46f2-a193-609572c7b8bc is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:56:36.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-436" for this suite.
Feb  9 14:56:42.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:56:42.386: INFO: namespace container-probe-436 deletion completed in 6.144492745s

• [SLOW TEST:258.896 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
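The probe spec above runs a webserver pod for roughly four minutes and asserts that its `restartCount` stays at 0, i.e. a passing `/healthz` HTTP liveness probe never triggers a restart. A minimal sketch of such a pod; the image and timing values are assumptions, not read from the log:

```yaml
# Hypothetical sketch: pod with an httpGet /healthz liveness probe.
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-example    # illustrative name
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/test-webserver   # assumption: image commonly used by these e2e tests
    livenessProbe:
      httpGet:
        path: /healthz            # kubelet GETs this; 2xx/3xx means healthy
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3         # restart only after 3 consecutive failures
```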
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:56:42.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 14:56:42.553: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Feb  9 14:56:42.588: INFO: Number of nodes with available pods: 0
Feb  9 14:56:42.588: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Feb  9 14:56:42.732: INFO: Number of nodes with available pods: 0
Feb  9 14:56:42.732: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:43.744: INFO: Number of nodes with available pods: 0
Feb  9 14:56:43.744: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:44.741: INFO: Number of nodes with available pods: 0
Feb  9 14:56:44.741: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:45.753: INFO: Number of nodes with available pods: 0
Feb  9 14:56:45.753: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:46.744: INFO: Number of nodes with available pods: 0
Feb  9 14:56:46.744: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:47.818: INFO: Number of nodes with available pods: 0
Feb  9 14:56:47.818: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:48.741: INFO: Number of nodes with available pods: 0
Feb  9 14:56:48.741: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:49.770: INFO: Number of nodes with available pods: 0
Feb  9 14:56:49.770: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:50.743: INFO: Number of nodes with available pods: 0
Feb  9 14:56:50.743: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:51.761: INFO: Number of nodes with available pods: 1
Feb  9 14:56:51.761: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Feb  9 14:56:51.859: INFO: Number of nodes with available pods: 1
Feb  9 14:56:51.860: INFO: Number of running nodes: 0, number of available pods: 1
Feb  9 14:56:52.873: INFO: Number of nodes with available pods: 0
Feb  9 14:56:52.873: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Feb  9 14:56:52.918: INFO: Number of nodes with available pods: 0
Feb  9 14:56:52.918: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:53.927: INFO: Number of nodes with available pods: 0
Feb  9 14:56:53.927: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:54.926: INFO: Number of nodes with available pods: 0
Feb  9 14:56:54.927: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:55.930: INFO: Number of nodes with available pods: 0
Feb  9 14:56:55.931: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:56.928: INFO: Number of nodes with available pods: 0
Feb  9 14:56:56.928: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:57.928: INFO: Number of nodes with available pods: 0
Feb  9 14:56:57.928: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:58.934: INFO: Number of nodes with available pods: 0
Feb  9 14:56:58.934: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:56:59.936: INFO: Number of nodes with available pods: 0
Feb  9 14:56:59.936: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:00.939: INFO: Number of nodes with available pods: 0
Feb  9 14:57:00.939: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:01.928: INFO: Number of nodes with available pods: 0
Feb  9 14:57:01.928: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:02.928: INFO: Number of nodes with available pods: 0
Feb  9 14:57:02.928: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:03.926: INFO: Number of nodes with available pods: 0
Feb  9 14:57:03.927: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:04.926: INFO: Number of nodes with available pods: 0
Feb  9 14:57:04.926: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:05.988: INFO: Number of nodes with available pods: 0
Feb  9 14:57:05.988: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:06.929: INFO: Number of nodes with available pods: 0
Feb  9 14:57:06.929: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:07.926: INFO: Number of nodes with available pods: 0
Feb  9 14:57:07.926: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:08.928: INFO: Number of nodes with available pods: 0
Feb  9 14:57:08.928: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:09.926: INFO: Number of nodes with available pods: 0
Feb  9 14:57:09.926: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:10.925: INFO: Number of nodes with available pods: 0
Feb  9 14:57:10.925: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:11.938: INFO: Number of nodes with available pods: 0
Feb  9 14:57:11.938: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:12.927: INFO: Number of nodes with available pods: 0
Feb  9 14:57:12.927: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:13.926: INFO: Number of nodes with available pods: 0
Feb  9 14:57:13.926: INFO: Node iruya-node is running more than one daemon pod
Feb  9 14:57:14.926: INFO: Number of nodes with available pods: 1
Feb  9 14:57:14.926: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9302, will wait for the garbage collector to delete the pods
Feb  9 14:57:15.003: INFO: Deleting DaemonSet.extensions daemon-set took: 12.163815ms
Feb  9 14:57:15.304: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.846043ms
Feb  9 14:57:26.716: INFO: Number of nodes with available pods: 0
Feb  9 14:57:26.716: INFO: Number of running nodes: 0, number of available pods: 0
Feb  9 14:57:26.762: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9302/daemonsets","resourceVersion":"23710677"},"items":null}

Feb  9 14:57:26.768: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9302/pods","resourceVersion":"23710677"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:57:26.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-9302" for this suite.
Feb  9 14:57:32.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:57:33.152: INFO: namespace daemonsets-9302 deletion completed in 6.324981534s

• [SLOW TEST:50.766 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
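The "run and stop complex daemon" spec drives DaemonSet scheduling with node labels: pods appear only once a node is labeled to match the `nodeSelector`, disappear when the label changes, and reappear after the selector and update strategy are changed. A sketch of the resource involved, assuming a `color` label key (the test actually uses a generated, namespace-specific key):

```yaml
# Hypothetical sketch: DaemonSet gated by a node label.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set-example     # illustrative label
  updateStrategy:
    type: RollingUpdate           # the test switches to this strategy mid-run
  template:
    metadata:
      labels:
        app: daemon-set-example
    spec:
      nodeSelector:
        color: blue               # no node carries this label initially, so 0 pods run
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # assumption: placeholder image
```

Labeling a node (e.g. `kubectl label node iruya-node color=blue`) schedules a daemon pod there; overwriting the label to `green` unschedules it, matching the "Change node label to blue" and "Update the node label to green" steps in the log.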
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:57:33.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-6c7822ea-da60-49db-b89b-36098874bdcc
STEP: Creating a pod to test consume secrets
Feb  9 14:57:33.443: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-23bec24e-421b-4d3e-bd24-2db7761538bd" in namespace "projected-7213" to be "success or failure"
Feb  9 14:57:33.455: INFO: Pod "pod-projected-secrets-23bec24e-421b-4d3e-bd24-2db7761538bd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.747211ms
Feb  9 14:57:35.463: INFO: Pod "pod-projected-secrets-23bec24e-421b-4d3e-bd24-2db7761538bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019781234s
Feb  9 14:57:37.477: INFO: Pod "pod-projected-secrets-23bec24e-421b-4d3e-bd24-2db7761538bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033531367s
Feb  9 14:57:39.487: INFO: Pod "pod-projected-secrets-23bec24e-421b-4d3e-bd24-2db7761538bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043531875s
Feb  9 14:57:41.499: INFO: Pod "pod-projected-secrets-23bec24e-421b-4d3e-bd24-2db7761538bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056395661s
Feb  9 14:57:43.513: INFO: Pod "pod-projected-secrets-23bec24e-421b-4d3e-bd24-2db7761538bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070242445s
STEP: Saw pod success
Feb  9 14:57:43.514: INFO: Pod "pod-projected-secrets-23bec24e-421b-4d3e-bd24-2db7761538bd" satisfied condition "success or failure"
Feb  9 14:57:43.519: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-23bec24e-421b-4d3e-bd24-2db7761538bd container projected-secret-volume-test: 
STEP: delete the pod
Feb  9 14:57:43.693: INFO: Waiting for pod pod-projected-secrets-23bec24e-421b-4d3e-bd24-2db7761538bd to disappear
Feb  9 14:57:43.708: INFO: Pod pod-projected-secrets-23bec24e-421b-4d3e-bd24-2db7761538bd no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:57:43.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7213" for this suite.
Feb  9 14:57:49.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:57:49.900: INFO: namespace projected-7213 deletion completed in 6.18362729s

• [SLOW TEST:16.747 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
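The "mappings and Item Mode set" spec projects a secret with per-item key-to-path mappings and an explicit file mode, then reads the file back from a test container. A sketch of the volume definition being tested, with illustrative key, path, and mode values:

```yaml
# Hypothetical sketch: projected secret with item mapping and per-item mode.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # illustrative name
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1             # secret key to project
            path: new-path-data-1   # file name inside the mount
            mode: 0400              # per-item mode, the [LinuxOnly] assertion target
```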
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:57:49.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-ed115e32-a87d-40a1-b847-67242f6b2039
STEP: Creating a pod to test consume configMaps
Feb  9 14:57:50.060: INFO: Waiting up to 5m0s for pod "pod-configmaps-f38b081a-d50a-4f10-a47c-c340c57b6dfc" in namespace "configmap-5472" to be "success or failure"
Feb  9 14:57:50.063: INFO: Pod "pod-configmaps-f38b081a-d50a-4f10-a47c-c340c57b6dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.627072ms
Feb  9 14:57:52.075: INFO: Pod "pod-configmaps-f38b081a-d50a-4f10-a47c-c340c57b6dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014852136s
Feb  9 14:57:54.090: INFO: Pod "pod-configmaps-f38b081a-d50a-4f10-a47c-c340c57b6dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030764235s
Feb  9 14:57:56.101: INFO: Pod "pod-configmaps-f38b081a-d50a-4f10-a47c-c340c57b6dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041405724s
Feb  9 14:57:58.112: INFO: Pod "pod-configmaps-f38b081a-d50a-4f10-a47c-c340c57b6dfc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052627629s
Feb  9 14:58:00.120: INFO: Pod "pod-configmaps-f38b081a-d50a-4f10-a47c-c340c57b6dfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060117892s
STEP: Saw pod success
Feb  9 14:58:00.120: INFO: Pod "pod-configmaps-f38b081a-d50a-4f10-a47c-c340c57b6dfc" satisfied condition "success or failure"
Feb  9 14:58:00.124: INFO: Trying to get logs from node iruya-node pod pod-configmaps-f38b081a-d50a-4f10-a47c-c340c57b6dfc container configmap-volume-test: 
STEP: delete the pod
Feb  9 14:58:00.286: INFO: Waiting for pod pod-configmaps-f38b081a-d50a-4f10-a47c-c340c57b6dfc to disappear
Feb  9 14:58:00.292: INFO: Pod pod-configmaps-f38b081a-d50a-4f10-a47c-c340c57b6dfc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:58:00.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5472" for this suite.
Feb  9 14:58:06.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:58:06.481: INFO: namespace configmap-5472 deletion completed in 6.164743336s

• [SLOW TEST:16.581 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
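The ConfigMap `defaultMode` spec is the same consumption pattern, but the mode is set once for every projected file rather than per item. A sketch of the volume, with assumed names and mode:

```yaml
# Hypothetical sketch: configMap volume with a volume-wide defaultMode.
volumes:
- name: configmap-volume
  configMap:
    name: configmap-test-volume
    defaultMode: 0400     # applied to every file in the volume unless an item overrides it
```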
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:58:06.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb  9 14:58:06.978: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8611,SelfLink:/api/v1/namespaces/watch-8611/configmaps/e2e-watch-test-watch-closed,UID:68a6196d-3d3d-4aaf-82f6-e3729dde342d,ResourceVersion:23710801,Generation:0,CreationTimestamp:2020-02-09 14:58:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  9 14:58:06.980: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8611,SelfLink:/api/v1/namespaces/watch-8611/configmaps/e2e-watch-test-watch-closed,UID:68a6196d-3d3d-4aaf-82f6-e3729dde342d,ResourceVersion:23710802,Generation:0,CreationTimestamp:2020-02-09 14:58:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb  9 14:58:07.039: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8611,SelfLink:/api/v1/namespaces/watch-8611/configmaps/e2e-watch-test-watch-closed,UID:68a6196d-3d3d-4aaf-82f6-e3729dde342d,ResourceVersion:23710804,Generation:0,CreationTimestamp:2020-02-09 14:58:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  9 14:58:07.039: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-8611,SelfLink:/api/v1/namespaces/watch-8611/configmaps/e2e-watch-test-watch-closed,UID:68a6196d-3d3d-4aaf-82f6-e3729dde342d,ResourceVersion:23710805,Generation:0,CreationTimestamp:2020-02-09 14:58:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:58:07.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8611" for this suite.
Feb  9 14:58:13.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:58:13.248: INFO: namespace watch-8611 deletion completed in 6.200266699s

• [SLOW TEST:6.767 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:58:13.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 14:58:13.358: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5d921ab3-bf6e-4598-894e-a622fbb1158b" in namespace "projected-41" to be "success or failure"
Feb  9 14:58:13.365: INFO: Pod "downwardapi-volume-5d921ab3-bf6e-4598-894e-a622fbb1158b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.849244ms
Feb  9 14:58:15.376: INFO: Pod "downwardapi-volume-5d921ab3-bf6e-4598-894e-a622fbb1158b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017069539s
Feb  9 14:58:17.383: INFO: Pod "downwardapi-volume-5d921ab3-bf6e-4598-894e-a622fbb1158b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024111055s
Feb  9 14:58:19.392: INFO: Pod "downwardapi-volume-5d921ab3-bf6e-4598-894e-a622fbb1158b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033473784s
Feb  9 14:58:21.401: INFO: Pod "downwardapi-volume-5d921ab3-bf6e-4598-894e-a622fbb1158b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04282372s
Feb  9 14:58:23.407: INFO: Pod "downwardapi-volume-5d921ab3-bf6e-4598-894e-a622fbb1158b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.048773067s
STEP: Saw pod success
Feb  9 14:58:23.407: INFO: Pod "downwardapi-volume-5d921ab3-bf6e-4598-894e-a622fbb1158b" satisfied condition "success or failure"
Feb  9 14:58:23.411: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5d921ab3-bf6e-4598-894e-a622fbb1158b container client-container: 
STEP: delete the pod
Feb  9 14:58:23.492: INFO: Waiting for pod downwardapi-volume-5d921ab3-bf6e-4598-894e-a622fbb1158b to disappear
Feb  9 14:58:23.539: INFO: Pod downwardapi-volume-5d921ab3-bf6e-4598-894e-a622fbb1158b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:58:23.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-41" for this suite.
Feb  9 14:58:29.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:58:29.724: INFO: namespace projected-41 deletion completed in 6.175094663s

• [SLOW TEST:16.476 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
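The "should provide podname only" spec mounts a projected downward API volume exposing `metadata.name` as a file and has the container print it. A sketch of the downward API source it exercises:

```yaml
# Hypothetical sketch: projected downward API volume exposing the pod name.
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: podname           # file name inside the mount
          fieldRef:
            fieldPath: metadata.name   # resolves to the pod's own name
```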
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:58:29.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-79ecf173-8e6f-44a8-a217-ab98546418d8
STEP: Creating a pod to test consume secrets
Feb  9 14:58:29.838: INFO: Waiting up to 5m0s for pod "pod-secrets-7312f5ad-6c0e-4449-bd25-ac23a0453fb9" in namespace "secrets-7794" to be "success or failure"
Feb  9 14:58:29.875: INFO: Pod "pod-secrets-7312f5ad-6c0e-4449-bd25-ac23a0453fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 36.337899ms
Feb  9 14:58:31.898: INFO: Pod "pod-secrets-7312f5ad-6c0e-4449-bd25-ac23a0453fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059102576s
Feb  9 14:58:33.908: INFO: Pod "pod-secrets-7312f5ad-6c0e-4449-bd25-ac23a0453fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069589068s
Feb  9 14:58:35.918: INFO: Pod "pod-secrets-7312f5ad-6c0e-4449-bd25-ac23a0453fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079145335s
Feb  9 14:58:37.926: INFO: Pod "pod-secrets-7312f5ad-6c0e-4449-bd25-ac23a0453fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087711133s
Feb  9 14:58:39.934: INFO: Pod "pod-secrets-7312f5ad-6c0e-4449-bd25-ac23a0453fb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.095534182s
STEP: Saw pod success
Feb  9 14:58:39.934: INFO: Pod "pod-secrets-7312f5ad-6c0e-4449-bd25-ac23a0453fb9" satisfied condition "success or failure"
Feb  9 14:58:39.938: INFO: Trying to get logs from node iruya-node pod pod-secrets-7312f5ad-6c0e-4449-bd25-ac23a0453fb9 container secret-volume-test: 
STEP: delete the pod
Feb  9 14:58:40.042: INFO: Waiting for pod pod-secrets-7312f5ad-6c0e-4449-bd25-ac23a0453fb9 to disappear
Feb  9 14:58:40.056: INFO: Pod pod-secrets-7312f5ad-6c0e-4449-bd25-ac23a0453fb9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:58:40.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7794" for this suite.
Feb  9 14:58:46.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:58:46.268: INFO: namespace secrets-7794 deletion completed in 6.205665778s

• [SLOW TEST:16.543 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
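The "consumable in multiple volumes" spec mounts the same secret through two distinct volumes in one pod. A sketch of the shape, with illustrative names:

```yaml
# Hypothetical sketch: one secret, two volumes, two mount paths.
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example       # illustrative name
spec:
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test     # both volumes reference the same secret
  - name: secret-volume-2
    secret:
      secretName: secret-test
```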
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:58:46.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 14:58:46.477: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"c0f49528-d05b-40eb-bcc3-faaa54902679", Controller:(*bool)(0xc0029b43c2), BlockOwnerDeletion:(*bool)(0xc0029b43c3)}}
Feb  9 14:58:46.486: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a3708e61-5b9b-487e-92c4-80b127f1ac73", Controller:(*bool)(0xc001f6fc72), BlockOwnerDeletion:(*bool)(0xc001f6fc73)}}
Feb  9 14:58:46.547: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"abd9e8f2-6395-4109-aa03-1d7d10e59a9e", Controller:(*bool)(0xc0029b45a2), BlockOwnerDeletion:(*bool)(0xc0029b45a3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:58:51.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2875" for this suite.
Feb  9 14:58:57.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:58:57.757: INFO: namespace gc-2875 deletion completed in 6.161896268s

• [SLOW TEST:11.488 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
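The three pods logged above own one another in a ring (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the test verifies that the garbage collector and namespace deletion are not blocked by such a circle. A minimal Python sketch of detecting that ownership circle follows; it is a toy model of the ownerReference graph only, not the actual garbage collector's graph tracker:

```python
def find_cycle(owner_of):
    """Return one ownership cycle (list of names) if the ownerReference
    graph contains one, else None. owner_of maps object -> its owner."""
    for start in owner_of:
        seen = []
        node = start
        while node in owner_of:
            if node in seen:
                # walked back onto a node already on this path: a circle
                return seen[seen.index(node):]
            seen.append(node)
            node = owner_of[node]
    return None

# The graph built by this test: pod1 owned by pod3, pod2 by pod1, pod3 by pod2.
owners = {"pod1": "pod3", "pod2": "pod1", "pod3": "pod2"}
print(find_cycle(owners))  # ['pod1', 'pod3', 'pod2']
```

The real garbage collector tolerates such circles because deletion decisions are made per-object from the live graph rather than by recursing through owners, which is what lets the namespace teardown above complete normally.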
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:58:57.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6cf9ffc2-5633-4590-ba0b-07c0ac7126bf
STEP: Creating a pod to test consume configMaps
Feb  9 14:58:57.902: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-384d478e-9a71-4e92-b743-44c4b52c701c" in namespace "projected-8948" to be "success or failure"
Feb  9 14:58:57.910: INFO: Pod "pod-projected-configmaps-384d478e-9a71-4e92-b743-44c4b52c701c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.625459ms
Feb  9 14:58:59.919: INFO: Pod "pod-projected-configmaps-384d478e-9a71-4e92-b743-44c4b52c701c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016033104s
Feb  9 14:59:01.926: INFO: Pod "pod-projected-configmaps-384d478e-9a71-4e92-b743-44c4b52c701c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023108306s
Feb  9 14:59:03.936: INFO: Pod "pod-projected-configmaps-384d478e-9a71-4e92-b743-44c4b52c701c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03295425s
Feb  9 14:59:06.136: INFO: Pod "pod-projected-configmaps-384d478e-9a71-4e92-b743-44c4b52c701c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.233279258s
Feb  9 14:59:08.144: INFO: Pod "pod-projected-configmaps-384d478e-9a71-4e92-b743-44c4b52c701c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.241448013s
Feb  9 14:59:10.152: INFO: Pod "pod-projected-configmaps-384d478e-9a71-4e92-b743-44c4b52c701c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.249405388s
STEP: Saw pod success
Feb  9 14:59:10.152: INFO: Pod "pod-projected-configmaps-384d478e-9a71-4e92-b743-44c4b52c701c" satisfied condition "success or failure"
Feb  9 14:59:10.158: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-384d478e-9a71-4e92-b743-44c4b52c701c container projected-configmap-volume-test: 
STEP: delete the pod
Feb  9 14:59:10.329: INFO: Waiting for pod pod-projected-configmaps-384d478e-9a71-4e92-b743-44c4b52c701c to disappear
Feb  9 14:59:10.339: INFO: Pod pod-projected-configmaps-384d478e-9a71-4e92-b743-44c4b52c701c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:59:10.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8948" for this suite.
Feb  9 14:59:16.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:59:16.562: INFO: namespace projected-8948 deletion completed in 6.217354078s

• [SLOW TEST:18.805 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
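The projected-ConfigMap case above mounts each key as a file and applies the volume's defaultMode to it. The log does not show which mode value the conformance test used, so the 0o400 below is purely illustrative; this is a small sketch of the projection step, not the kubelet's implementation:

```python
import os
import stat
import tempfile

def project_key(dirname, key, value, default_mode=0o644):
    """Emulate projecting one ConfigMap key into a volume directory:
    write the key's value as a file, then apply the defaultMode bits."""
    path = os.path.join(dirname, key)
    with open(path, "w") as f:
        f.write(value)
    os.chmod(path, default_mode)
    return path

with tempfile.TemporaryDirectory() as d:
    p = project_key(d, "data-1", "value-1", default_mode=0o400)
    print(oct(stat.S_IMODE(os.stat(p).st_mode)))  # 0o400
```

The test pod then reads the file back and checks both the content and the mode, which is why the mount-permission behavior is tagged [LinuxOnly].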
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:59:16.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Feb  9 14:59:16.647: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb  9 14:59:16.662: INFO: Waiting for terminating namespaces to be deleted...
Feb  9 14:59:16.665: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Feb  9 14:59:16.673: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Feb  9 14:59:16.673: INFO: 	Container weave ready: true, restart count 0
Feb  9 14:59:16.673: INFO: 	Container weave-npc ready: true, restart count 0
Feb  9 14:59:16.673: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded)
Feb  9 14:59:16.673: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  9 14:59:16.673: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Feb  9 14:59:16.718: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded)
Feb  9 14:59:16.718: INFO: 	Container etcd ready: true, restart count 0
Feb  9 14:59:16.718: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Feb  9 14:59:16.718: INFO: 	Container weave ready: true, restart count 0
Feb  9 14:59:16.718: INFO: 	Container weave-npc ready: true, restart count 0
Feb  9 14:59:16.718: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  9 14:59:16.718: INFO: 	Container coredns ready: true, restart count 0
Feb  9 14:59:16.718: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded)
Feb  9 14:59:16.718: INFO: 	Container kube-controller-manager ready: true, restart count 20
Feb  9 14:59:16.718: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded)
Feb  9 14:59:16.718: INFO: 	Container kube-proxy ready: true, restart count 0
Feb  9 14:59:16.718: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded)
Feb  9 14:59:16.718: INFO: 	Container kube-apiserver ready: true, restart count 0
Feb  9 14:59:16.718: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded)
Feb  9 14:59:16.718: INFO: 	Container kube-scheduler ready: true, restart count 13
Feb  9 14:59:16.718: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded)
Feb  9 14:59:16.718: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15f1c3eaa4eec857], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:59:17.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6152" for this suite.
Feb  9 14:59:23.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:59:23.974: INFO: namespace sched-pred-6152 deletion completed in 6.221845037s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.412 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
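The FailedScheduling event above ("0/2 nodes are available: 2 node(s) didn't match node selector") is the expected outcome: a pod's nodeSelector is a hard requirement, and the test deliberately uses a selector no node satisfies. A minimal sketch of that predicate, with illustrative labels and a hypothetical selector (the real scheduler evaluates many more predicates than this):

```python
def node_matches(node_labels, node_selector):
    """nodeSelector semantics: every key/value pair must be present
    verbatim in the node's labels for the node to be feasible."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

nodes = {
    "iruya-node": {"kubernetes.io/hostname": "iruya-node"},
    "iruya-server-sfge57q7djm7": {"kubernetes.io/hostname": "iruya-server-sfge57q7djm7"},
}
selector = {"some-label": "nonempty-value"}  # hypothetical; no node carries it
feasible = [n for n, labels in nodes.items() if node_matches(labels, selector)]
print(f"{len(feasible)}/{len(nodes)} nodes are available")  # 0/2 nodes are available
```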
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:59:23.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0209 14:59:34.150934       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  9 14:59:34.151: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 14:59:34.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3162" for this suite.
Feb  9 14:59:42.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 14:59:42.251: INFO: namespace gc-3162 deletion completed in 8.097261095s

• [SLOW TEST:18.277 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 14:59:42.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0209 15:00:23.236568       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  9 15:00:23.236: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:00:23.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6041" for this suite.
Feb  9 15:00:33.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:00:33.730: INFO: namespace gc-6041 deletion completed in 10.488802062s

• [SLOW TEST:51.479 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
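The two garbage-collector cases above differ only in the delete options: without orphaning, deleting the RC cascades to its pods; with an orphan policy, the pods survive and only their ownerReferences are removed. A toy model of that propagation choice, assuming a flat name-to-owner map rather than the real dependency graph:

```python
def delete_rc(objects, owner_refs, rc, orphan=False):
    """Toy model of deletionPropagation. objects: set of live object
    names; owner_refs: dependent -> owner. Deleting an RC either
    cascades to its dependents or orphans them."""
    objects.discard(rc)
    dependents = [d for d, o in owner_refs.items() if o == rc]
    for d in dependents:
        if orphan:
            del owner_refs[d]   # strip the ownerReference; pod keeps running
        else:
            objects.discard(d)  # garbage-collect the dependent pod
            del owner_refs[d]
    return objects

# "should delete pods created by rc when not orphaning"
live = delete_rc({"rc", "pod-a", "pod-b"}, {"pod-a": "rc", "pod-b": "rc"}, "rc")
print(sorted(live))  # []

# "should orphan pods created by rc if delete options say so"
live = delete_rc({"rc", "pod-a", "pod-b"}, {"pod-a": "rc", "pod-b": "rc"}, "rc",
                 orphan=True)
print(sorted(live))  # ['pod-a', 'pod-b']
```

The 30-second wait in the orphaning test exists precisely to catch a garbage collector that mistakenly cascades despite the orphan policy.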
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:00:33.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb  9 15:00:34.137: INFO: Waiting up to 5m0s for pod "downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5" in namespace "downward-api-7298" to be "success or failure"
Feb  9 15:00:34.213: INFO: Pod "downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5": Phase="Pending", Reason="", readiness=false. Elapsed: 75.101965ms
Feb  9 15:00:36.908: INFO: Pod "downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.770281522s
Feb  9 15:00:38.922: INFO: Pod "downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.785091158s
Feb  9 15:00:40.933: INFO: Pod "downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.795171045s
Feb  9 15:00:42.947: INFO: Pod "downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.809746372s
Feb  9 15:00:44.962: INFO: Pod "downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.824864974s
Feb  9 15:00:46.972: INFO: Pod "downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5": Phase="Pending", Reason="", readiness=false. Elapsed: 12.835031437s
Feb  9 15:00:48.983: INFO: Pod "downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5": Phase="Pending", Reason="", readiness=false. Elapsed: 14.845526228s
Feb  9 15:00:50.989: INFO: Pod "downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.851373576s
Feb  9 15:00:52.997: INFO: Pod "downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.859148278s
STEP: Saw pod success
Feb  9 15:00:52.997: INFO: Pod "downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5" satisfied condition "success or failure"
Feb  9 15:00:52.999: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5 container client-container: 
STEP: delete the pod
Feb  9 15:00:53.190: INFO: Waiting for pod downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5 to disappear
Feb  9 15:00:53.206: INFO: Pod downwardapi-volume-634acdf2-7070-40f9-be88-43759b3c39c5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:00:53.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7298" for this suite.
Feb  9 15:00:59.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:00:59.386: INFO: namespace downward-api-7298 deletion completed in 6.17219166s

• [SLOW TEST:25.654 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
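The downward API volume in the test above exposes the pod's own metadata as files (here just the pod name, via a fieldRef to metadata.name), which the client-container then reads back. A minimal sketch of that projection, with a shortened illustrative pod name and a hypothetical helper; the real kubelet supports more fieldRef paths than the metadata keys modeled here:

```python
import os
import tempfile

def write_downward_api(dirname, pod_meta, items):
    """Write selected pod metadata fields as files, the way a
    downwardAPI volume does. items: filename -> fieldRef path."""
    for filename, field_path in items.items():
        section, key = field_path.split(".", 1)  # e.g. "metadata.name"
        with open(os.path.join(dirname, filename), "w") as f:
            f.write(pod_meta[key])

pod_meta = {"name": "downwardapi-volume-634acdf2", "namespace": "downward-api-7298"}
with tempfile.TemporaryDirectory() as d:
    write_downward_api(d, pod_meta, {"podname": "metadata.name"})
    print(open(os.path.join(d, "podname")).read())  # downwardapi-volume-634acdf2
```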
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:00:59.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Feb  9 15:00:59.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1951'
Feb  9 15:01:02.319: INFO: stderr: ""
Feb  9 15:01:02.319: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Feb  9 15:01:03.328: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:01:03.328: INFO: Found 0 / 1
Feb  9 15:01:04.328: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:01:04.328: INFO: Found 0 / 1
Feb  9 15:01:05.329: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:01:05.329: INFO: Found 0 / 1
Feb  9 15:01:06.331: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:01:06.331: INFO: Found 0 / 1
Feb  9 15:01:07.329: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:01:07.329: INFO: Found 0 / 1
Feb  9 15:01:08.382: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:01:08.382: INFO: Found 0 / 1
Feb  9 15:01:09.331: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:01:09.331: INFO: Found 0 / 1
Feb  9 15:01:10.339: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:01:10.339: INFO: Found 0 / 1
Feb  9 15:01:11.334: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:01:11.335: INFO: Found 1 / 1
Feb  9 15:01:11.335: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  9 15:01:11.340: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:01:11.341: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Feb  9 15:01:11.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zzds5 redis-master --namespace=kubectl-1951'
Feb  9 15:01:11.524: INFO: stderr: ""
Feb  9 15:01:11.524: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Feb 15:01:09.313 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Feb 15:01:09.314 # Server started, Redis version 3.2.12\n1:M 09 Feb 15:01:09.314 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Feb 15:01:09.314 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Feb  9 15:01:11.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zzds5 redis-master --namespace=kubectl-1951 --tail=1'
Feb  9 15:01:11.696: INFO: stderr: ""
Feb  9 15:01:11.696: INFO: stdout: "1:M 09 Feb 15:01:09.314 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Feb  9 15:01:11.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zzds5 redis-master --namespace=kubectl-1951 --limit-bytes=1'
Feb  9 15:01:11.849: INFO: stderr: ""
Feb  9 15:01:11.849: INFO: stdout: " "
STEP: exposing timestamps
Feb  9 15:01:11.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zzds5 redis-master --namespace=kubectl-1951 --tail=1 --timestamps'
Feb  9 15:01:11.991: INFO: stderr: ""
Feb  9 15:01:11.991: INFO: stdout: "2020-02-09T15:01:09.315132385Z 1:M 09 Feb 15:01:09.314 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Feb  9 15:01:14.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zzds5 redis-master --namespace=kubectl-1951 --since=1s'
Feb  9 15:01:14.682: INFO: stderr: ""
Feb  9 15:01:14.682: INFO: stdout: ""
Feb  9 15:01:14.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-zzds5 redis-master --namespace=kubectl-1951 --since=24h'
Feb  9 15:01:15.254: INFO: stderr: ""
Feb  9 15:01:15.254: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 09 Feb 15:01:09.313 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 09 Feb 15:01:09.314 # Server started, Redis version 3.2.12\n1:M 09 Feb 15:01:09.314 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 09 Feb 15:01:09.314 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Feb  9 15:01:15.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1951'
Feb  9 15:01:15.429: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  9 15:01:15.429: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Feb  9 15:01:15.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1951'
Feb  9 15:01:15.553: INFO: stderr: "No resources found.\n"
Feb  9 15:01:15.554: INFO: stdout: ""
Feb  9 15:01:15.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1951 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  9 15:01:15.701: INFO: stderr: ""
Feb  9 15:01:15.701: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:01:15.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1951" for this suite.
Feb  9 15:01:37.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:01:37.907: INFO: namespace kubectl-1951 deletion completed in 22.199658687s

• [SLOW TEST:38.521 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
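The filtering steps above exercise kubectl's log flags: --tail=1 returns the last line, --limit-bytes=1 returns the first byte (a single space here, because the Redis banner begins with spaces), --timestamps prefixes each line with its RFC3339 time, and --since restricts by age. A small sketch of the first two filters' semantics on a log string; this models the observed behavior, not kubectl's implementation:

```python
def tail(log, n):
    """kubectl logs --tail=n: keep only the last n lines."""
    lines = log.splitlines(keepends=True)
    return "".join(lines[-n:])

def limit_bytes(log, n):
    """kubectl logs --limit-bytes=n: keep only the first n bytes."""
    return log.encode()[:n].decode(errors="ignore")

log = "line one\nline two\nready to accept connections\n"
print(tail(log, 1))         # ready to accept connections
print(limit_bytes(log, 1))  # l
```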
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:01:37.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-3076f9f9-9c6e-41b3-8ce0-cc4bd8c91bc4
Feb  9 15:01:38.081: INFO: Pod name my-hostname-basic-3076f9f9-9c6e-41b3-8ce0-cc4bd8c91bc4: Found 0 pods out of 1
Feb  9 15:01:43.092: INFO: Pod name my-hostname-basic-3076f9f9-9c6e-41b3-8ce0-cc4bd8c91bc4: Found 1 pods out of 1
Feb  9 15:01:43.092: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3076f9f9-9c6e-41b3-8ce0-cc4bd8c91bc4" are running
Feb  9 15:01:47.103: INFO: Pod "my-hostname-basic-3076f9f9-9c6e-41b3-8ce0-cc4bd8c91bc4-r62zg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-09 15:01:38 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-09 15:01:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3076f9f9-9c6e-41b3-8ce0-cc4bd8c91bc4]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-09 15:01:38 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-3076f9f9-9c6e-41b3-8ce0-cc4bd8c91bc4]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-09 15:01:38 +0000 UTC Reason: Message:}])
Feb  9 15:01:47.103: INFO: Trying to dial the pod
Feb  9 15:01:52.156: INFO: Controller my-hostname-basic-3076f9f9-9c6e-41b3-8ce0-cc4bd8c91bc4: Got expected result from replica 1 [my-hostname-basic-3076f9f9-9c6e-41b3-8ce0-cc4bd8c91bc4-r62zg]: "my-hostname-basic-3076f9f9-9c6e-41b3-8ce0-cc4bd8c91bc4-r62zg", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:01:52.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1505" for this suite.
Feb  9 15:01:58.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:01:58.330: INFO: namespace replication-controller-1505 deletion completed in 6.166842611s

• [SLOW TEST:20.422 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:01:58.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-442b003b-7369-4b63-87d4-def47f9da827
STEP: Creating a pod to test consume secrets
Feb  9 15:01:58.432: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a152f424-a9f1-4d9e-bdf0-19ce578a20e2" in namespace "projected-9710" to be "success or failure"
Feb  9 15:01:58.439: INFO: Pod "pod-projected-secrets-a152f424-a9f1-4d9e-bdf0-19ce578a20e2": Phase="Pending", Reason="", readiness=false. Elapsed: 7.15618ms
Feb  9 15:02:00.448: INFO: Pod "pod-projected-secrets-a152f424-a9f1-4d9e-bdf0-19ce578a20e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015846514s
Feb  9 15:02:02.457: INFO: Pod "pod-projected-secrets-a152f424-a9f1-4d9e-bdf0-19ce578a20e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025299329s
Feb  9 15:02:04.482: INFO: Pod "pod-projected-secrets-a152f424-a9f1-4d9e-bdf0-19ce578a20e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049484245s
Feb  9 15:02:06.494: INFO: Pod "pod-projected-secrets-a152f424-a9f1-4d9e-bdf0-19ce578a20e2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061515204s
Feb  9 15:02:08.509: INFO: Pod "pod-projected-secrets-a152f424-a9f1-4d9e-bdf0-19ce578a20e2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.076461844s
Feb  9 15:02:10.523: INFO: Pod "pod-projected-secrets-a152f424-a9f1-4d9e-bdf0-19ce578a20e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.090618936s
STEP: Saw pod success
Feb  9 15:02:10.523: INFO: Pod "pod-projected-secrets-a152f424-a9f1-4d9e-bdf0-19ce578a20e2" satisfied condition "success or failure"
Feb  9 15:02:10.535: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-a152f424-a9f1-4d9e-bdf0-19ce578a20e2 container projected-secret-volume-test: 
STEP: delete the pod
Feb  9 15:02:10.623: INFO: Waiting for pod pod-projected-secrets-a152f424-a9f1-4d9e-bdf0-19ce578a20e2 to disappear
Feb  9 15:02:10.628: INFO: Pod pod-projected-secrets-a152f424-a9f1-4d9e-bdf0-19ce578a20e2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:02:10.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9710" for this suite.
Feb  9 15:02:16.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:02:16.804: INFO: namespace projected-9710 deletion completed in 6.168798402s

• [SLOW TEST:18.474 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:02:16.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:02:17.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8563" for this suite.
Feb  9 15:02:23.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:02:23.217: INFO: namespace kubelet-test-8563 deletion completed in 6.18067116s

• [SLOW TEST:6.412 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:02:23.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb  9 15:02:32.419: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:02:32.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5385" for this suite.
Feb  9 15:02:40.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:02:40.677: INFO: namespace container-runtime-5385 deletion completed in 8.20433763s

• [SLOW TEST:17.460 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:02:40.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Feb  9 15:02:50.946: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Feb  9 15:03:01.073: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:03:01.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1991" for this suite.
Feb  9 15:03:07.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:03:07.312: INFO: namespace pods-1991 deletion completed in 6.204428435s

• [SLOW TEST:26.635 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:03:07.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-xd76
STEP: Creating a pod to test atomic-volume-subpath
Feb  9 15:03:07.498: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xd76" in namespace "subpath-4047" to be "success or failure"
Feb  9 15:03:07.503: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.763853ms
Feb  9 15:03:09.512: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013852655s
Feb  9 15:03:11.517: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019656186s
Feb  9 15:03:13.541: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043330662s
Feb  9 15:03:15.561: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062755825s
Feb  9 15:03:17.569: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Pending", Reason="", readiness=false. Elapsed: 10.071515352s
Feb  9 15:03:19.580: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Running", Reason="", readiness=true. Elapsed: 12.081859669s
Feb  9 15:03:21.594: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Running", Reason="", readiness=true. Elapsed: 14.096457666s
Feb  9 15:03:23.610: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Running", Reason="", readiness=true. Elapsed: 16.111856867s
Feb  9 15:03:25.622: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Running", Reason="", readiness=true. Elapsed: 18.123728929s
Feb  9 15:03:27.628: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Running", Reason="", readiness=true. Elapsed: 20.130673857s
Feb  9 15:03:29.636: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Running", Reason="", readiness=true. Elapsed: 22.138067561s
Feb  9 15:03:31.645: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Running", Reason="", readiness=true. Elapsed: 24.147069341s
Feb  9 15:03:33.654: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Running", Reason="", readiness=true. Elapsed: 26.156470999s
Feb  9 15:03:35.665: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Running", Reason="", readiness=true. Elapsed: 28.167245015s
Feb  9 15:03:37.674: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Running", Reason="", readiness=true. Elapsed: 30.17568354s
Feb  9 15:03:39.683: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Running", Reason="", readiness=true. Elapsed: 32.185182508s
Feb  9 15:03:41.696: INFO: Pod "pod-subpath-test-configmap-xd76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.198509843s
STEP: Saw pod success
Feb  9 15:03:41.697: INFO: Pod "pod-subpath-test-configmap-xd76" satisfied condition "success or failure"
Feb  9 15:03:41.700: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-xd76 container test-container-subpath-configmap-xd76: 
STEP: delete the pod
Feb  9 15:03:41.781: INFO: Waiting for pod pod-subpath-test-configmap-xd76 to disappear
Feb  9 15:03:41.789: INFO: Pod pod-subpath-test-configmap-xd76 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xd76
Feb  9 15:03:41.789: INFO: Deleting pod "pod-subpath-test-configmap-xd76" in namespace "subpath-4047"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:03:41.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4047" for this suite.
Feb  9 15:03:47.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:03:48.013: INFO: namespace subpath-4047 deletion completed in 6.207472959s

• [SLOW TEST:40.700 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:03:48.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Feb  9 15:03:48.324: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Feb  9 15:03:48.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8902'
Feb  9 15:03:48.724: INFO: stderr: ""
Feb  9 15:03:48.724: INFO: stdout: "service/redis-slave created\n"
Feb  9 15:03:48.725: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Feb  9 15:03:48.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8902'
Feb  9 15:03:49.136: INFO: stderr: ""
Feb  9 15:03:49.136: INFO: stdout: "service/redis-master created\n"
Feb  9 15:03:49.137: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb  9 15:03:49.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8902'
Feb  9 15:03:49.544: INFO: stderr: ""
Feb  9 15:03:49.544: INFO: stdout: "service/frontend created\n"
Feb  9 15:03:49.545: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Feb  9 15:03:49.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8902'
Feb  9 15:03:49.993: INFO: stderr: ""
Feb  9 15:03:49.993: INFO: stdout: "deployment.apps/frontend created\n"
Feb  9 15:03:49.994: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb  9 15:03:49.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8902'
Feb  9 15:03:50.570: INFO: stderr: ""
Feb  9 15:03:50.570: INFO: stdout: "deployment.apps/redis-master created\n"
Feb  9 15:03:50.571: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Feb  9 15:03:50.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8902'
Feb  9 15:03:51.663: INFO: stderr: ""
Feb  9 15:03:51.663: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Feb  9 15:03:51.663: INFO: Waiting for all frontend pods to be Running.
Feb  9 15:04:16.716: INFO: Waiting for frontend to serve content.
Feb  9 15:04:18.231: INFO: Trying to add a new entry to the guestbook.
Feb  9 15:04:18.399: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Feb  9 15:04:18.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8902'
Feb  9 15:04:18.807: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  9 15:04:18.807: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb  9 15:04:18.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8902'
Feb  9 15:04:18.956: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  9 15:04:18.957: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  9 15:04:18.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8902'
Feb  9 15:04:19.179: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  9 15:04:19.179: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  9 15:04:19.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8902'
Feb  9 15:04:19.350: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  9 15:04:19.350: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb  9 15:04:19.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8902'
Feb  9 15:04:19.514: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  9 15:04:19.514: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Feb  9 15:04:19.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8902'
Feb  9 15:04:19.852: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  9 15:04:19.852: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:04:19.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8902" for this suite.
Feb  9 15:05:00.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:05:00.092: INFO: namespace kubectl-8902 deletion completed in 40.203660062s

• [SLOW TEST:72.078 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:05:00.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-4853
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4853 to expose endpoints map[]
Feb  9 15:05:00.256: INFO: successfully validated that service endpoint-test2 in namespace services-4853 exposes endpoints map[] (13.198697ms elapsed)
STEP: Creating pod pod1 in namespace services-4853
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4853 to expose endpoints map[pod1:[80]]
Feb  9 15:05:04.385: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.108740715s elapsed, will retry)
Feb  9 15:05:09.453: INFO: successfully validated that service endpoint-test2 in namespace services-4853 exposes endpoints map[pod1:[80]] (9.176398912s elapsed)
STEP: Creating pod pod2 in namespace services-4853
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4853 to expose endpoints map[pod1:[80] pod2:[80]]
Feb  9 15:05:14.689: INFO: Unexpected endpoints: found map[0b6cc214-a576-454e-bca9-cfa2ce39e152:[80]], expected map[pod1:[80] pod2:[80]] (5.225803752s elapsed, will retry)
Feb  9 15:05:16.734: INFO: successfully validated that service endpoint-test2 in namespace services-4853 exposes endpoints map[pod1:[80] pod2:[80]] (7.271035162s elapsed)
STEP: Deleting pod pod1 in namespace services-4853
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4853 to expose endpoints map[pod2:[80]]
Feb  9 15:05:16.840: INFO: successfully validated that service endpoint-test2 in namespace services-4853 exposes endpoints map[pod2:[80]] (91.579127ms elapsed)
STEP: Deleting pod pod2 in namespace services-4853
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4853 to expose endpoints map[]
Feb  9 15:05:17.901: INFO: successfully validated that service endpoint-test2 in namespace services-4853 exposes endpoints map[] (1.033484354s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:05:18.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4853" for this suite.
Feb  9 15:05:40.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:05:40.195: INFO: namespace services-4853 deletion completed in 22.146642829s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:40.103 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:05:40.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 15:05:40.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:05:50.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9209" for this suite.
Feb  9 15:06:42.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:06:43.037: INFO: namespace pods-9209 deletion completed in 52.145045365s

• [SLOW TEST:62.842 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:06:43.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:06:53.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-3914" for this suite.
Feb  9 15:06:59.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:06:59.559: INFO: namespace emptydir-wrapper-3914 deletion completed in 6.140499532s

• [SLOW TEST:16.522 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:06:59.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-69b14644-9bee-427e-bd68-7ce5faadf70b
STEP: Creating a pod to test consume secrets
Feb  9 15:06:59.741: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f48071c7-f812-4a15-aa7e-92cf378a3de4" in namespace "projected-7031" to be "success or failure"
Feb  9 15:06:59.762: INFO: Pod "pod-projected-secrets-f48071c7-f812-4a15-aa7e-92cf378a3de4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.336465ms
Feb  9 15:07:01.770: INFO: Pod "pod-projected-secrets-f48071c7-f812-4a15-aa7e-92cf378a3de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028381783s
Feb  9 15:07:03.795: INFO: Pod "pod-projected-secrets-f48071c7-f812-4a15-aa7e-92cf378a3de4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054045524s
Feb  9 15:07:05.987: INFO: Pod "pod-projected-secrets-f48071c7-f812-4a15-aa7e-92cf378a3de4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245206654s
Feb  9 15:07:07.993: INFO: Pod "pod-projected-secrets-f48071c7-f812-4a15-aa7e-92cf378a3de4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.251648173s
Feb  9 15:07:10.001: INFO: Pod "pod-projected-secrets-f48071c7-f812-4a15-aa7e-92cf378a3de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.259353627s
STEP: Saw pod success
Feb  9 15:07:10.001: INFO: Pod "pod-projected-secrets-f48071c7-f812-4a15-aa7e-92cf378a3de4" satisfied condition "success or failure"
Feb  9 15:07:10.007: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f48071c7-f812-4a15-aa7e-92cf378a3de4 container projected-secret-volume-test: 
STEP: delete the pod
Feb  9 15:07:11.137: INFO: Waiting for pod pod-projected-secrets-f48071c7-f812-4a15-aa7e-92cf378a3de4 to disappear
Feb  9 15:07:11.149: INFO: Pod pod-projected-secrets-f48071c7-f812-4a15-aa7e-92cf378a3de4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:07:11.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7031" for this suite.
Feb  9 15:07:17.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:07:17.437: INFO: namespace projected-7031 deletion completed in 6.278215862s

• [SLOW TEST:17.877 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:07:17.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0209 15:07:47.745820       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb  9 15:07:47.745: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:07:47.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6197" for this suite.
Feb  9 15:07:54.248: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:07:54.977: INFO: namespace gc-6197 deletion completed in 7.228611645s

• [SLOW TEST:37.540 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:07:54.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Feb  9 15:07:55.400: INFO: Waiting up to 5m0s for pod "pod-8a79aae2-8c4e-41d9-95d5-3ed88c6ad336" in namespace "emptydir-9713" to be "success or failure"
Feb  9 15:07:55.438: INFO: Pod "pod-8a79aae2-8c4e-41d9-95d5-3ed88c6ad336": Phase="Pending", Reason="", readiness=false. Elapsed: 38.406941ms
Feb  9 15:07:57.543: INFO: Pod "pod-8a79aae2-8c4e-41d9-95d5-3ed88c6ad336": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142574427s
Feb  9 15:07:59.549: INFO: Pod "pod-8a79aae2-8c4e-41d9-95d5-3ed88c6ad336": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149327131s
Feb  9 15:08:01.559: INFO: Pod "pod-8a79aae2-8c4e-41d9-95d5-3ed88c6ad336": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158516561s
Feb  9 15:08:03.570: INFO: Pod "pod-8a79aae2-8c4e-41d9-95d5-3ed88c6ad336": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170131409s
Feb  9 15:08:05.603: INFO: Pod "pod-8a79aae2-8c4e-41d9-95d5-3ed88c6ad336": Phase="Running", Reason="", readiness=true. Elapsed: 10.202999153s
Feb  9 15:08:07.626: INFO: Pod "pod-8a79aae2-8c4e-41d9-95d5-3ed88c6ad336": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.225723416s
STEP: Saw pod success
Feb  9 15:08:07.626: INFO: Pod "pod-8a79aae2-8c4e-41d9-95d5-3ed88c6ad336" satisfied condition "success or failure"
Feb  9 15:08:07.631: INFO: Trying to get logs from node iruya-node pod pod-8a79aae2-8c4e-41d9-95d5-3ed88c6ad336 container test-container: 
STEP: delete the pod
Feb  9 15:08:07.709: INFO: Waiting for pod pod-8a79aae2-8c4e-41d9-95d5-3ed88c6ad336 to disappear
Feb  9 15:08:07.719: INFO: Pod pod-8a79aae2-8c4e-41d9-95d5-3ed88c6ad336 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:08:07.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9713" for this suite.
Feb  9 15:08:13.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:08:13.915: INFO: namespace emptydir-9713 deletion completed in 6.186184424s

• [SLOW TEST:18.937 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:08:13.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-24b5ec32-00a2-49ca-9c70-2455a92c39b3
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-24b5ec32-00a2-49ca-9c70-2455a92c39b3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:09:54.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1362" for this suite.
Feb  9 15:10:16.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:10:16.192: INFO: namespace projected-1362 deletion completed in 22.137140931s

• [SLOW TEST:122.276 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:10:16.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 15:10:16.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1194'
Feb  9 15:10:16.709: INFO: stderr: ""
Feb  9 15:10:16.709: INFO: stdout: "replicationcontroller/redis-master created\n"
Feb  9 15:10:16.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1194'
Feb  9 15:10:17.189: INFO: stderr: ""
Feb  9 15:10:17.189: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Feb  9 15:10:18.197: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:10:18.198: INFO: Found 0 / 1
Feb  9 15:10:19.237: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:10:19.237: INFO: Found 0 / 1
Feb  9 15:10:20.198: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:10:20.199: INFO: Found 0 / 1
Feb  9 15:10:21.230: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:10:21.230: INFO: Found 0 / 1
Feb  9 15:10:22.196: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:10:22.197: INFO: Found 0 / 1
Feb  9 15:10:23.260: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:10:23.260: INFO: Found 0 / 1
Feb  9 15:10:24.199: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:10:24.199: INFO: Found 0 / 1
Feb  9 15:10:25.198: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:10:25.198: INFO: Found 1 / 1
Feb  9 15:10:25.198: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb  9 15:10:25.202: INFO: Selector matched 1 pods for map[app:redis]
Feb  9 15:10:25.202: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb  9 15:10:25.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-rk4z8 --namespace=kubectl-1194'
Feb  9 15:10:25.368: INFO: stderr: ""
Feb  9 15:10:25.368: INFO: stdout: "Name:           redis-master-rk4z8\nNamespace:      kubectl-1194\nPriority:       0\nNode:           iruya-node/10.96.3.65\nStart Time:     Sun, 09 Feb 2020 15:10:16 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://692a8336799cf3bd704f0e103b56e92525355f7f2cb7b87e7a8d18b0edb11107\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 09 Feb 2020 15:10:24 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r8tlz (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-r8tlz:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-r8tlz\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                 Message\n  ----    ------     ----  ----                 -------\n  Normal  Scheduled  9s    default-scheduler    Successfully assigned kubectl-1194/redis-master-rk4z8 to iruya-node\n  Normal  Pulled     5s    kubelet, iruya-node  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    2s    kubelet, iruya-node  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-node  Started container redis-master\n"
Feb  9 15:10:25.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-1194'
Feb  9 15:10:25.562: INFO: stderr: ""
Feb  9 15:10:25.563: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-1194\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  9s    replication-controller  Created pod: redis-master-rk4z8\n"
Feb  9 15:10:25.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-1194'
Feb  9 15:10:25.659: INFO: stderr: ""
Feb  9 15:10:25.659: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-1194\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.98.22.244\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb  9 15:10:25.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node'
Feb  9 15:10:25.788: INFO: stderr: ""
Feb  9 15:10:25.788: INFO: stdout: "Name:               iruya-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 04 Aug 2019 09:01:39 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 12 Oct 2019 11:56:49 +0000   Sat, 12 Oct 2019 11:56:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Sun, 09 Feb 2020 15:10:17 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 09 Feb 2020 15:10:17 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 09 Feb 2020 15:10:17 +0000   Sun, 04 Aug 2019 09:01:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 09 Feb 2020 15:10:17 +0000   Sun, 04 Aug 2019 09:02:19 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.3.65\n  Hostname:    iruya-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 f573dcf04d6f4a87856a35d266a2fa7a\n System UUID:                F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID:                    8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.15.1\n Kube-Proxy Version:         v1.15.1\nPodCIDR:                     10.96.1.0/24\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-976zl      0 (0%)        0 (0%)      0 (0%)           0 (0%)         189d\n  kube-system                weave-net-rlp57       20m (0%)      0 (0%)      0 (0%)           0 (0%)         120d\n  kubectl-1194               redis-master-rk4z8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb  9 15:10:25.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-1194'
Feb  9 15:10:26.030: INFO: stderr: ""
Feb  9 15:10:26.030: INFO: stdout: "Name:         kubectl-1194\nLabels:       e2e-framework=kubectl\n              e2e-run=06b627d1-debe-4764-8f4b-2f9996ebffea\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:10:26.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1194" for this suite.
Feb  9 15:10:48.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:10:48.224: INFO: namespace kubectl-1194 deletion completed in 22.18843133s

• [SLOW TEST:32.032 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:10:48.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:10:48.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3322" for this suite.
Feb  9 15:11:10.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:11:10.743: INFO: namespace pods-3322 deletion completed in 22.195465677s

• [SLOW TEST:22.518 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:11:10.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6949/configmap-test-d935cb05-3c45-4ef8-b953-c889aa10b3d5
STEP: Creating a pod to test consume configMaps
Feb  9 15:11:10.983: INFO: Waiting up to 5m0s for pod "pod-configmaps-512f8155-c677-40e6-a752-5055df2c214a" in namespace "configmap-6949" to be "success or failure"
Feb  9 15:11:11.004: INFO: Pod "pod-configmaps-512f8155-c677-40e6-a752-5055df2c214a": Phase="Pending", Reason="", readiness=false. Elapsed: 20.287828ms
Feb  9 15:11:13.012: INFO: Pod "pod-configmaps-512f8155-c677-40e6-a752-5055df2c214a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028151477s
Feb  9 15:11:15.019: INFO: Pod "pod-configmaps-512f8155-c677-40e6-a752-5055df2c214a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035362795s
Feb  9 15:11:17.028: INFO: Pod "pod-configmaps-512f8155-c677-40e6-a752-5055df2c214a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044317118s
Feb  9 15:11:19.036: INFO: Pod "pod-configmaps-512f8155-c677-40e6-a752-5055df2c214a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052090475s
Feb  9 15:11:21.049: INFO: Pod "pod-configmaps-512f8155-c677-40e6-a752-5055df2c214a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065420833s
STEP: Saw pod success
Feb  9 15:11:21.049: INFO: Pod "pod-configmaps-512f8155-c677-40e6-a752-5055df2c214a" satisfied condition "success or failure"
Feb  9 15:11:21.054: INFO: Trying to get logs from node iruya-node pod pod-configmaps-512f8155-c677-40e6-a752-5055df2c214a container env-test: 
STEP: delete the pod
Feb  9 15:11:21.115: INFO: Waiting for pod pod-configmaps-512f8155-c677-40e6-a752-5055df2c214a to disappear
Feb  9 15:11:21.172: INFO: Pod pod-configmaps-512f8155-c677-40e6-a752-5055df2c214a no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:11:21.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6949" for this suite.
Feb  9 15:11:27.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:11:27.356: INFO: namespace configmap-6949 deletion completed in 6.175340034s

• [SLOW TEST:16.611 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:11:27.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-cf14ddb4-38a0-4350-b761-5a34da2b1944 in namespace container-probe-3136
Feb  9 15:11:35.544: INFO: Started pod busybox-cf14ddb4-38a0-4350-b761-5a34da2b1944 in namespace container-probe-3136
STEP: checking the pod's current state and verifying that restartCount is present
Feb  9 15:11:35.550: INFO: Initial restart count of pod busybox-cf14ddb4-38a0-4350-b761-5a34da2b1944 is 0
Feb  9 15:12:31.801: INFO: Restart count of pod container-probe-3136/busybox-cf14ddb4-38a0-4350-b761-5a34da2b1944 is now 1 (56.251177989s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:12:31.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3136" for this suite.
Feb  9 15:12:39.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:12:40.076: INFO: namespace container-probe-3136 deletion completed in 8.22059092s

• [SLOW TEST:72.719 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
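An aside for anyone post-processing logs like the probe test above: the restart transition is reported in a single INFO line, so it can be pulled out with a short script. The regex below is assumed from the exact line format shown in this log, not from any e2e framework API.

```python
import re

# Matches lines such as:
# "Restart count of pod container-probe-3136/busybox-... is now 1 (56.251177989s elapsed)"
RESTART_RE = re.compile(
    r"Restart count of pod (?P<pod>\S+) is now (?P<count>\d+) "
    r"\((?P<elapsed>[\d.]+)s elapsed\)"
)

def parse_restart(line):
    """Return (pod, restart_count, elapsed_seconds), or None if no match."""
    m = RESTART_RE.search(line)
    if not m:
        return None
    return m.group("pod"), int(m.group("count")), float(m.group("elapsed"))

sample = ("Feb  9 15:12:31.801: INFO: Restart count of pod "
          "container-probe-3136/busybox-cf14ddb4-38a0-4350-b761-5a34da2b1944 "
          "is now 1 (56.251177989s elapsed)")
print(parse_restart(sample))
```

This is only a log-scraping sketch; the authoritative restart count lives in the pod's `status.containerStatuses[].restartCount`.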
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:12:40.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb  9 15:12:40.185: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1505,SelfLink:/api/v1/namespaces/watch-1505/configmaps/e2e-watch-test-label-changed,UID:a527fcb6-3702-4365-a2e7-ce0b278db51a,ResourceVersion:23713050,Generation:0,CreationTimestamp:2020-02-09 15:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  9 15:12:40.185: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1505,SelfLink:/api/v1/namespaces/watch-1505/configmaps/e2e-watch-test-label-changed,UID:a527fcb6-3702-4365-a2e7-ce0b278db51a,ResourceVersion:23713051,Generation:0,CreationTimestamp:2020-02-09 15:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  9 15:12:40.185: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1505,SelfLink:/api/v1/namespaces/watch-1505/configmaps/e2e-watch-test-label-changed,UID:a527fcb6-3702-4365-a2e7-ce0b278db51a,ResourceVersion:23713052,Generation:0,CreationTimestamp:2020-02-09 15:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb  9 15:12:50.229: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1505,SelfLink:/api/v1/namespaces/watch-1505/configmaps/e2e-watch-test-label-changed,UID:a527fcb6-3702-4365-a2e7-ce0b278db51a,ResourceVersion:23713067,Generation:0,CreationTimestamp:2020-02-09 15:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  9 15:12:50.229: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1505,SelfLink:/api/v1/namespaces/watch-1505/configmaps/e2e-watch-test-label-changed,UID:a527fcb6-3702-4365-a2e7-ce0b278db51a,ResourceVersion:23713068,Generation:0,CreationTimestamp:2020-02-09 15:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Feb  9 15:12:50.230: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-1505,SelfLink:/api/v1/namespaces/watch-1505/configmaps/e2e-watch-test-label-changed,UID:a527fcb6-3702-4365-a2e7-ce0b278db51a,ResourceVersion:23713069,Generation:0,CreationTimestamp:2020-02-09 15:12:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:12:50.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1505" for this suite.
Feb  9 15:12:56.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:12:56.487: INFO: namespace watch-1505 deletion completed in 6.232499217s

• [SLOW TEST:16.410 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
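A similar aside for the watch tests above: each notification is logged as `Got : <EVENT> &ConfigMap{...}`, so the event sequence (ADDED/MODIFIED/DELETED, object name, ResourceVersion) can be recovered from the log alone. The regex below is assumed from the serialization shown here and would need adjusting for other object kinds.

```python
import re

# Assumed from lines like:
# "Got : ADDED &ConfigMap{ObjectMeta:...{Name:e2e-watch-test-label-changed,...,ResourceVersion:23713050,...}"
EVENT_RE = re.compile(
    r"Got : (?P<event>ADDED|MODIFIED|DELETED) &ConfigMap\{.*?"
    r"Name:(?P<name>[^,]*),.*?ResourceVersion:(?P<rv>\d+),"
)

def watch_events(lines):
    """Yield (event_type, configmap_name, resource_version) per notification."""
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            yield m.group("event"), m.group("name"), int(m.group("rv"))
```

Monotonically increasing ResourceVersions across ADDED → MODIFIED → DELETED (as in the sequence 23713050 → 23713051 → 23713052 above) are what the test is effectively asserting.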
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:12:56.488: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Feb  9 15:12:56.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5163'
Feb  9 15:12:58.784: INFO: stderr: ""
Feb  9 15:12:58.784: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb  9 15:12:58.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5163'
Feb  9 15:12:59.010: INFO: stderr: ""
Feb  9 15:12:59.010: INFO: stdout: "update-demo-nautilus-5sd2p update-demo-nautilus-bvfgg "
Feb  9 15:12:59.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5sd2p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5163'
Feb  9 15:12:59.207: INFO: stderr: ""
Feb  9 15:12:59.207: INFO: stdout: ""
Feb  9 15:12:59.207: INFO: update-demo-nautilus-5sd2p is created but not running
Feb  9 15:13:04.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5163'
Feb  9 15:13:05.237: INFO: stderr: ""
Feb  9 15:13:05.237: INFO: stdout: "update-demo-nautilus-5sd2p update-demo-nautilus-bvfgg "
Feb  9 15:13:05.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5sd2p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5163'
Feb  9 15:13:05.601: INFO: stderr: ""
Feb  9 15:13:05.601: INFO: stdout: ""
Feb  9 15:13:05.601: INFO: update-demo-nautilus-5sd2p is created but not running
Feb  9 15:13:10.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5163'
Feb  9 15:13:10.745: INFO: stderr: ""
Feb  9 15:13:10.745: INFO: stdout: "update-demo-nautilus-5sd2p update-demo-nautilus-bvfgg "
Feb  9 15:13:10.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5sd2p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5163'
Feb  9 15:13:10.864: INFO: stderr: ""
Feb  9 15:13:10.864: INFO: stdout: "true"
Feb  9 15:13:10.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5sd2p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5163'
Feb  9 15:13:10.991: INFO: stderr: ""
Feb  9 15:13:10.991: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  9 15:13:10.991: INFO: validating pod update-demo-nautilus-5sd2p
Feb  9 15:13:11.003: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  9 15:13:11.003: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  9 15:13:11.003: INFO: update-demo-nautilus-5sd2p is verified up and running
Feb  9 15:13:11.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvfgg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5163'
Feb  9 15:13:11.090: INFO: stderr: ""
Feb  9 15:13:11.090: INFO: stdout: "true"
Feb  9 15:13:11.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bvfgg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5163'
Feb  9 15:13:11.168: INFO: stderr: ""
Feb  9 15:13:11.168: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb  9 15:13:11.169: INFO: validating pod update-demo-nautilus-bvfgg
Feb  9 15:13:11.197: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb  9 15:13:11.197: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb  9 15:13:11.197: INFO: update-demo-nautilus-bvfgg is verified up and running
STEP: using delete to clean up resources
Feb  9 15:13:11.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5163'
Feb  9 15:13:11.318: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb  9 15:13:11.318: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb  9 15:13:11.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5163'
Feb  9 15:13:11.493: INFO: stderr: "No resources found.\n"
Feb  9 15:13:11.493: INFO: stdout: ""
Feb  9 15:13:11.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5163 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb  9 15:13:11.750: INFO: stderr: ""
Feb  9 15:13:11.750: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:13:11.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5163" for this suite.
Feb  9 15:13:33.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:13:33.928: INFO: namespace kubectl-5163 deletion completed in 22.160357559s

• [SLOW TEST:37.441 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:13:33.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:13:42.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6162" for this suite.
Feb  9 15:14:28.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:14:28.345: INFO: namespace kubelet-test-6162 deletion completed in 46.216658038s

• [SLOW TEST:54.416 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
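One more post-processing aside: the `• [SLOW TEST:NN seconds]` summaries scattered through this log make it straightforward to total where suite time went. A minimal sketch, with the line format assumed from these logs rather than any Ginkgo API:

```python
import re

# Assumed from summary lines like "• [SLOW TEST:16.611 seconds]".
SLOW_RE = re.compile(r"\[SLOW TEST:(?P<secs>[\d.]+) seconds\]")

def total_slow_seconds(lines):
    """Sum the durations reported by '[SLOW TEST:...]' summary lines."""
    total = 0.0
    for line in lines:
        m = SLOW_RE.search(line)
        if m:
            total += float(m.group("secs"))
    return total
```

Note the reported duration includes namespace teardown, which is why even trivial specs here take 6+ seconds.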
SS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:14:28.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Feb  9 15:14:36.486: INFO: Pod pod-hostip-c64c9f36-8991-4ed5-b672-0ee3534f9152 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:14:36.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4845" for this suite.
Feb  9 15:14:58.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:14:58.757: INFO: namespace pods-4845 deletion completed in 22.263694928s

• [SLOW TEST:30.412 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:14:58.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Feb  9 15:14:58.959: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix800397219/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:14:59.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3577" for this suite.
Feb  9 15:15:05.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:15:05.391: INFO: namespace kubectl-3577 deletion completed in 6.301026566s

• [SLOW TEST:6.632 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:15:05.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb  9 15:15:05.464: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-643,SelfLink:/api/v1/namespaces/watch-643/configmaps/e2e-watch-test-configmap-a,UID:939d59e9-139c-4ad8-9da6-4843c45e39f9,ResourceVersion:23713369,Generation:0,CreationTimestamp:2020-02-09 15:15:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  9 15:15:05.464: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-643,SelfLink:/api/v1/namespaces/watch-643/configmaps/e2e-watch-test-configmap-a,UID:939d59e9-139c-4ad8-9da6-4843c45e39f9,ResourceVersion:23713369,Generation:0,CreationTimestamp:2020-02-09 15:15:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb  9 15:15:15.487: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-643,SelfLink:/api/v1/namespaces/watch-643/configmaps/e2e-watch-test-configmap-a,UID:939d59e9-139c-4ad8-9da6-4843c45e39f9,ResourceVersion:23713383,Generation:0,CreationTimestamp:2020-02-09 15:15:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Feb  9 15:15:15.487: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-643,SelfLink:/api/v1/namespaces/watch-643/configmaps/e2e-watch-test-configmap-a,UID:939d59e9-139c-4ad8-9da6-4843c45e39f9,ResourceVersion:23713383,Generation:0,CreationTimestamp:2020-02-09 15:15:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb  9 15:15:25.503: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-643,SelfLink:/api/v1/namespaces/watch-643/configmaps/e2e-watch-test-configmap-a,UID:939d59e9-139c-4ad8-9da6-4843c45e39f9,ResourceVersion:23713398,Generation:0,CreationTimestamp:2020-02-09 15:15:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  9 15:15:25.504: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-643,SelfLink:/api/v1/namespaces/watch-643/configmaps/e2e-watch-test-configmap-a,UID:939d59e9-139c-4ad8-9da6-4843c45e39f9,ResourceVersion:23713398,Generation:0,CreationTimestamp:2020-02-09 15:15:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb  9 15:15:35.521: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-643,SelfLink:/api/v1/namespaces/watch-643/configmaps/e2e-watch-test-configmap-a,UID:939d59e9-139c-4ad8-9da6-4843c45e39f9,ResourceVersion:23713412,Generation:0,CreationTimestamp:2020-02-09 15:15:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Feb  9 15:15:35.521: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-643,SelfLink:/api/v1/namespaces/watch-643/configmaps/e2e-watch-test-configmap-a,UID:939d59e9-139c-4ad8-9da6-4843c45e39f9,ResourceVersion:23713412,Generation:0,CreationTimestamp:2020-02-09 15:15:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb  9 15:15:45.551: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-643,SelfLink:/api/v1/namespaces/watch-643/configmaps/e2e-watch-test-configmap-b,UID:5a395ad7-ba9f-416a-9a9c-ecd50caece04,ResourceVersion:23713425,Generation:0,CreationTimestamp:2020-02-09 15:15:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  9 15:15:45.551: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-643,SelfLink:/api/v1/namespaces/watch-643/configmaps/e2e-watch-test-configmap-b,UID:5a395ad7-ba9f-416a-9a9c-ecd50caece04,ResourceVersion:23713425,Generation:0,CreationTimestamp:2020-02-09 15:15:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb  9 15:15:55.565: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-643,SelfLink:/api/v1/namespaces/watch-643/configmaps/e2e-watch-test-configmap-b,UID:5a395ad7-ba9f-416a-9a9c-ecd50caece04,ResourceVersion:23713439,Generation:0,CreationTimestamp:2020-02-09 15:15:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Feb  9 15:15:55.566: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-643,SelfLink:/api/v1/namespaces/watch-643/configmaps/e2e-watch-test-configmap-b,UID:5a395ad7-ba9f-416a-9a9c-ecd50caece04,ResourceVersion:23713439,Generation:0,CreationTimestamp:2020-02-09 15:15:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:16:05.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-643" for this suite.
Feb  9 15:16:11.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:16:11.829: INFO: namespace watch-643 deletion completed in 6.253029778s

• [SLOW TEST:66.438 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:16:11.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Feb  9 15:16:11.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb  9 15:16:12.245: INFO: stderr: ""
Feb  9 15:16:12.245: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:16:12.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9456" for this suite.
Feb  9 15:16:18.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:16:18.423: INFO: namespace kubectl-9456 deletion completed in 6.170542586s

• [SLOW TEST:6.594 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
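The `api-versions` spec above reduces to scanning kubectl's stdout for a bare `v1` line. A minimal sketch of the same check, run here against a short excerpt of the stdout captured at 15:16:12 rather than a live cluster (against a real cluster you would pipe `kubectl api-versions` instead):

```shell
# Sketch of the e2e check: look for an exact "v1" line in api-versions output.
# The printf below replays a fragment of the logged stdout; it stands in for
# `kubectl api-versions`, which needs a reachable cluster.
printf 'apps/v1\nbatch/v1\nnetworking.k8s.io/v1\nv1\n' \
  | grep -qx 'v1' && echo "v1 present"
```

Note that `grep -x` matches the whole line, so group-qualified versions such as `apps/v1` alone would not satisfy the check.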
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:16:18.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-f23e7200-db24-46ac-82b2-c722df763256
STEP: Creating secret with name s-test-opt-upd-510c6218-8395-4451-8f93-593aff5a3015
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f23e7200-db24-46ac-82b2-c722df763256
STEP: Updating secret s-test-opt-upd-510c6218-8395-4451-8f93-593aff5a3015
STEP: Creating secret with name s-test-opt-create-45a9725c-1471-4b39-b417-8a66c715eb96
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:17:40.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2091" for this suite.
Feb  9 15:18:02.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:18:02.388: INFO: namespace secrets-2091 deletion completed in 22.117532526s

• [SLOW TEST:103.965 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
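The "optional" secrets exercised above (the `-opt-del-`, `-opt-upd-`, and `-opt-create-` names) rely on the `optional` field of the secret volume source, which lets the pod keep running while a referenced secret is deleted or not yet created. A hypothetical manifest fragment showing the shape of such a volume; the name is invented for illustration, not one generated by this run:

```shell
# Emit a minimal pod-spec fragment with an optional secret volume.
cat <<'EOF'
volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-create-example
      optional: true   # pod tolerates the secret being absent
EOF
```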
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:18:02.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 15:18:02.837: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb  9 15:18:12.869: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Feb  9 15:18:12.948: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-9147,SelfLink:/apis/apps/v1/namespaces/deployment-9147/deployments/test-cleanup-deployment,UID:18d4009a-11e6-4679-a35d-2aa6223772af,ResourceVersion:23713696,Generation:1,CreationTimestamp:2020-02-09 15:18:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Feb  9 15:18:13.004: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-9147,SelfLink:/apis/apps/v1/namespaces/deployment-9147/replicasets/test-cleanup-deployment-55bbcbc84c,UID:518dfab1-fc12-42d5-bfbe-38a9eda56ac1,ResourceVersion:23713698,Generation:1,CreationTimestamp:2020-02-09 15:18:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 18d4009a-11e6-4679-a35d-2aa6223772af 0xc001cb0d27 0xc001cb0d28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Feb  9 15:18:13.004: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Feb  9 15:18:13.004: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-9147,SelfLink:/apis/apps/v1/namespaces/deployment-9147/replicasets/test-cleanup-controller,UID:c2843c68-4bb6-494a-aa53-559a12a98b0a,ResourceVersion:23713697,Generation:1,CreationTimestamp:2020-02-09 15:18:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 18d4009a-11e6-4679-a35d-2aa6223772af 0xc001cb0c57 0xc001cb0c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Feb  9 15:18:13.024: INFO: Pod "test-cleanup-controller-76p8g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-76p8g,GenerateName:test-cleanup-controller-,Namespace:deployment-9147,SelfLink:/api/v1/namespaces/deployment-9147/pods/test-cleanup-controller-76p8g,UID:df5c522d-014c-4bf0-914f-bf2e3a7f06f9,ResourceVersion:23713694,Generation:0,CreationTimestamp:2020-02-09 15:18:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller c2843c68-4bb6-494a-aa53-559a12a98b0a 0xc0027912f7 0xc0027912f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qknlr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qknlr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-qknlr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002791370} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002791390}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 15:18:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 15:18:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 15:18:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-09 15:18:02 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-09 15:18:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-02-09 15:18:11 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://85ef16eabcddc29a846a6c045a51f206c496b866770bd1b61ca83a77505a707b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Feb  9 15:18:13.024: INFO: Pod "test-cleanup-deployment-55bbcbc84c-z526r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-z526r,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-9147,SelfLink:/api/v1/namespaces/deployment-9147/pods/test-cleanup-deployment-55bbcbc84c-z526r,UID:bb5d0dad-0d5f-4983-9c78-8c5e515a4635,ResourceVersion:23713701,Generation:0,CreationTimestamp:2020-02-09 15:18:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 518dfab1-fc12-42d5-bfbe-38a9eda56ac1 0xc002791477 0xc002791478}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qknlr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qknlr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-qknlr true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0027914e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002791500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:18:13.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9147" for this suite.
Feb  9 15:18:19.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:18:19.224: INFO: namespace deployment-9147 deletion completed in 6.184178662s

• [SLOW TEST:16.836 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
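The history cleanup waited on above is driven by the `RevisionHistoryLimit:*0` visible in the dumped DeploymentSpec: with a limit of 0, the controller deletes old ReplicaSets as soon as they are scaled down, which is why the spec only has to wait for `test-cleanup-controller` to disappear. A sketch of where that field sits in a manifest (deployment name taken from the log; the surrounding fields are illustrative):

```shell
# Emit the spec fields that make the controller prune old ReplicaSets.
cat <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  revisionHistoryLimit: 0   # keep no superseded ReplicaSets
  replicas: 1
EOF
```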
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:18:19.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  9 15:18:19.289: INFO: Creating ReplicaSet my-hostname-basic-99065e4e-9842-45e1-9beb-374ce396e6f7
Feb  9 15:18:19.412: INFO: Pod name my-hostname-basic-99065e4e-9842-45e1-9beb-374ce396e6f7: Found 1 pods out of 1
Feb  9 15:18:19.412: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-99065e4e-9842-45e1-9beb-374ce396e6f7" is running
Feb  9 15:18:31.450: INFO: Pod "my-hostname-basic-99065e4e-9842-45e1-9beb-374ce396e6f7-mm598" is running (conditions: [])
Feb  9 15:18:31.450: INFO: Trying to dial the pod
Feb  9 15:18:36.489: INFO: Controller my-hostname-basic-99065e4e-9842-45e1-9beb-374ce396e6f7: Got expected result from replica 1 [my-hostname-basic-99065e4e-9842-45e1-9beb-374ce396e6f7-mm598]: "my-hostname-basic-99065e4e-9842-45e1-9beb-374ce396e6f7-mm598", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:18:36.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3588" for this suite.
Feb  9 15:18:42.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:18:42.689: INFO: namespace replicaset-3588 deletion completed in 6.19310113s

• [SLOW TEST:23.464 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb  9 15:18:42.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-f0c869d7-b2f1-4411-90fb-3507d79d3945
STEP: Creating a pod to test consume secrets
Feb  9 15:18:42.824: INFO: Waiting up to 5m0s for pod "pod-secrets-08e54fad-2bf0-4927-8b01-4ddec8de3b2e" in namespace "secrets-9882" to be "success or failure"
Feb  9 15:18:42.838: INFO: Pod "pod-secrets-08e54fad-2bf0-4927-8b01-4ddec8de3b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.192039ms
Feb  9 15:18:44.850: INFO: Pod "pod-secrets-08e54fad-2bf0-4927-8b01-4ddec8de3b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025149745s
Feb  9 15:18:46.881: INFO: Pod "pod-secrets-08e54fad-2bf0-4927-8b01-4ddec8de3b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056212807s
Feb  9 15:18:48.891: INFO: Pod "pod-secrets-08e54fad-2bf0-4927-8b01-4ddec8de3b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066109763s
Feb  9 15:18:50.904: INFO: Pod "pod-secrets-08e54fad-2bf0-4927-8b01-4ddec8de3b2e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079463469s
Feb  9 15:18:52.932: INFO: Pod "pod-secrets-08e54fad-2bf0-4927-8b01-4ddec8de3b2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.106996173s
STEP: Saw pod success
Feb  9 15:18:52.932: INFO: Pod "pod-secrets-08e54fad-2bf0-4927-8b01-4ddec8de3b2e" satisfied condition "success or failure"
Feb  9 15:18:52.937: INFO: Trying to get logs from node iruya-node pod pod-secrets-08e54fad-2bf0-4927-8b01-4ddec8de3b2e container secret-volume-test: 
STEP: delete the pod
Feb  9 15:18:53.126: INFO: Waiting for pod pod-secrets-08e54fad-2bf0-4927-8b01-4ddec8de3b2e to disappear
Feb  9 15:18:53.130: INFO: Pod pod-secrets-08e54fad-2bf0-4927-8b01-4ddec8de3b2e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  9 15:18:53.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9882" for this suite.
Feb  9 15:18:59.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb  9 15:18:59.267: INFO: namespace secrets-9882 deletion completed in 6.132931132s

• [SLOW TEST:16.576 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
Feb  9 15:18:59.267: INFO: Running AfterSuite actions on all nodes
Feb  9 15:18:59.267: INFO: Running AfterSuite actions on node 1
Feb  9 15:18:59.267: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 8569.645 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (8569.95s)
FAIL