I0705 12:55:50.692641 6 e2e.go:243] Starting e2e run "7631942a-f1ca-46b8-a9fe-6e83c7f4dcb8" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1593953749 - Will randomize all specs
Will run 215 of 4413 specs

Jul 5 12:55:50.872: INFO: >>> kubeConfig: /root/.kube/config
Jul 5 12:55:50.876: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 5 12:55:50.904: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 5 12:55:50.936: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 5 12:55:50.936: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 5 12:55:50.936: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 5 12:55:50.947: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 5 12:55:50.947: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 5 12:55:50.947: INFO: e2e test version: v1.15.12
Jul 5 12:55:50.949: INFO: kube-apiserver version: v1.15.11
SSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 5 12:55:50.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
Jul 5 12:55:50.996: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-2e819c32-aaaa-40c3-b292-cd7ca08083fa in namespace container-probe-6635
Jul 5 12:55:55.084: INFO: Started pod test-webserver-2e819c32-aaaa-40c3-b292-cd7ca08083fa in namespace container-probe-6635
STEP: checking the pod's current state and verifying that restartCount is present
Jul 5 12:55:55.088: INFO: Initial restart count of pod test-webserver-2e819c32-aaaa-40c3-b292-cd7ca08083fa is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 5 12:59:55.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6635" for this suite.
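For reference, the test-webserver pod this spec creates looks roughly like the sketch below. The pod name and namespace come from the log above; the image, port, and probe settings are illustrative assumptions (the spec's title mentions /healthz, but the point is a probe that keeps succeeding so restartCount stays 0 for the full four-minute watch):

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-2e819c32-aaaa-40c3-b292-cd7ca08083fa   # from the log
  namespace: container-probe-6635
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed image
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /              # assumed: a path that always returns 200
        port: 80
      initialDelaySeconds: 15
      failureThreshold: 3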
Jul 5 13:00:02.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 13:00:02.293: INFO: namespace container-probe-6635 deletion completed in 6.393560127s

• [SLOW TEST:251.344 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 5 13:00:02.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3987
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jul 5 13:00:02.427: INFO: Found 0 stateful pods, waiting for 3
Jul 5 13:00:12.432: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 5 13:00:12.432: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 5 13:00:12.432: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 5 13:00:22.433: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 5 13:00:22.433: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 5 13:00:22.433: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul 5 13:00:22.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3987 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 5 13:00:25.231: INFO: stderr: "I0705 13:00:25.118960 30 log.go:172] (0xc0008d2370) (0xc000632960) Create stream\nI0705 13:00:25.119051 30 log.go:172] (0xc0008d2370) (0xc000632960) Stream added, broadcasting: 1\nI0705 13:00:25.122739 30 log.go:172] (0xc0008d2370) Reply frame received for 1\nI0705 13:00:25.122762 30 log.go:172] (0xc0008d2370) (0xc000632a00) Create stream\nI0705 13:00:25.122768 30 log.go:172] (0xc0008d2370) (0xc000632a00) Stream added, broadcasting: 3\nI0705 13:00:25.123765 30 log.go:172] (0xc0008d2370) Reply frame received for 3\nI0705 13:00:25.123813 30 log.go:172] (0xc0008d2370) (0xc00086c000) Create stream\nI0705 13:00:25.123827 30 log.go:172] (0xc0008d2370) (0xc00086c000) Stream added, broadcasting: 5\nI0705 13:00:25.124858 30 log.go:172] (0xc0008d2370) Reply frame received for 5\nI0705 13:00:25.192728 30 log.go:172] (0xc0008d2370) Data frame received for 5\nI0705 13:00:25.192764 30 log.go:172] (0xc00086c000) (5) Data frame handling\nI0705 13:00:25.192784 30 log.go:172] (0xc00086c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0705 13:00:25.222691 30 log.go:172] (0xc0008d2370) Data frame received for 3\nI0705 13:00:25.222820 30 log.go:172] (0xc000632a00) (3) Data frame handling\nI0705 13:00:25.222953 30 log.go:172] (0xc000632a00) (3) Data frame sent\nI0705 13:00:25.223180 30 log.go:172] (0xc0008d2370) Data frame received for 3\nI0705 13:00:25.223235 30 log.go:172] (0xc000632a00) (3) Data frame handling\nI0705 13:00:25.223271 30 log.go:172] (0xc0008d2370) Data frame received for 5\nI0705 13:00:25.223294 30 log.go:172] (0xc00086c000) (5) Data frame handling\nI0705 13:00:25.225063 30 log.go:172] (0xc0008d2370) Data frame received for 1\nI0705 13:00:25.225108 30 log.go:172] (0xc000632960) (1) Data frame handling\nI0705 13:00:25.225360 30 log.go:172] (0xc000632960) (1) Data frame sent\nI0705 13:00:25.225393 30 log.go:172] (0xc0008d2370) (0xc000632960) Stream removed, broadcasting: 1\nI0705 13:00:25.225417 30 log.go:172] (0xc0008d2370) Go away received\nI0705 13:00:25.226081 30 log.go:172] (0xc0008d2370) (0xc000632960) Stream removed, broadcasting: 1\nI0705 13:00:25.226104 30 log.go:172] (0xc0008d2370) (0xc000632a00) Stream removed, broadcasting: 3\nI0705 13:00:25.226116 30 log.go:172] (0xc0008d2370) (0xc00086c000) Stream removed, broadcasting: 5\n"
Jul 5 13:00:25.231: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 5 13:00:25.231: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul 5 13:00:35.261: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul 5 13:00:45.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3987 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 5 13:00:45.524: INFO: stderr: "I0705 13:00:45.432639 61 log.go:172] (0xc00013a6e0) (0xc00021c960) Create stream\nI0705 13:00:45.432690 61 log.go:172] (0xc00013a6e0) (0xc00021c960) Stream added, broadcasting: 1\nI0705 13:00:45.435398 61 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0705 13:00:45.435428 61 log.go:172] (0xc00013a6e0) (0xc00021ca00) Create stream\nI0705 13:00:45.435438 61 log.go:172] (0xc00013a6e0) (0xc00021ca00) Stream added, broadcasting: 3\nI0705 13:00:45.436716 61 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0705 13:00:45.436757 61 log.go:172] (0xc00013a6e0) (0xc000974000) Create stream\nI0705 13:00:45.436770 61 log.go:172] (0xc00013a6e0) (0xc000974000) Stream added, broadcasting: 5\nI0705 13:00:45.438407 61 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0705 13:00:45.518209 61 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0705 13:00:45.518266 61 log.go:172] (0xc00021ca00) (3) Data frame handling\nI0705 13:00:45.518281 61 log.go:172] (0xc00021ca00) (3) Data frame sent\nI0705 13:00:45.518291 61 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0705 13:00:45.518301 61 log.go:172] (0xc00021ca00) (3) Data frame handling\nI0705 13:00:45.518343 61 log.go:172] (0xc00013a6e0) Data frame received for 5\nI0705 13:00:45.518361 61 log.go:172] (0xc000974000) (5) Data frame handling\nI0705 13:00:45.518389 61 log.go:172] (0xc000974000) (5) Data frame sent\nI0705 13:00:45.518427 61 log.go:172] (0xc00013a6e0) Data frame received for 5\nI0705 13:00:45.518450 61 log.go:172] (0xc000974000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0705 13:00:45.519370 61 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0705 13:00:45.519389 61 log.go:172] (0xc00021c960) (1) Data frame handling\nI0705 13:00:45.519399 61 log.go:172] (0xc00021c960) (1) Data frame sent\nI0705 13:00:45.519411 61 log.go:172] (0xc00013a6e0) (0xc00021c960) Stream removed, broadcasting: 1\nI0705 13:00:45.519427 61 log.go:172] (0xc00013a6e0) Go away received\nI0705 13:00:45.519913 61 log.go:172] (0xc00013a6e0) (0xc00021c960) Stream removed, broadcasting: 1\nI0705 13:00:45.519935 61 log.go:172] (0xc00013a6e0) (0xc00021ca00) Stream removed, broadcasting: 3\nI0705 13:00:45.519951 61 log.go:172] (0xc00013a6e0) (0xc000974000) Stream removed, broadcasting: 5\n"
Jul 5 13:00:45.524: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 5 13:00:45.525: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jul 5 13:01:15.546: INFO: Waiting for StatefulSet statefulset-3987/ss2 to complete update
STEP: Rolling back to a previous revision
Jul 5 13:01:25.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3987 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul 5 13:01:25.784: INFO: stderr: "I0705 13:01:25.686030 80 log.go:172] (0xc0007ec630) (0xc0006de780) Create stream\nI0705 13:01:25.686090 80 log.go:172] (0xc0007ec630) (0xc0006de780) Stream added, broadcasting: 1\nI0705 13:01:25.688271 80 log.go:172] (0xc0007ec630) Reply frame received for 1\nI0705 13:01:25.688318 80 log.go:172] (0xc0007ec630) (0xc000784000) Create stream\nI0705 13:01:25.688333 80 log.go:172] (0xc0007ec630) (0xc000784000) Stream added, broadcasting: 3\nI0705 13:01:25.689797 80 log.go:172] (0xc0007ec630) Reply frame received for 3\nI0705 13:01:25.689837 80 log.go:172] (0xc0007ec630) (0xc0006de820) Create stream\nI0705 13:01:25.689854 80 log.go:172] (0xc0007ec630) (0xc0006de820) Stream added, broadcasting: 5\nI0705 13:01:25.690963 80 log.go:172] (0xc0007ec630) Reply frame received for 5\nI0705 13:01:25.749083 80 log.go:172] (0xc0007ec630) Data frame received for 5\nI0705 13:01:25.749105 80 log.go:172] (0xc0006de820) (5) Data frame handling\nI0705 13:01:25.749322 80 log.go:172] (0xc0006de820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0705 13:01:25.776476 80 log.go:172] (0xc0007ec630) Data frame received for 3\nI0705 13:01:25.776508 80 log.go:172] (0xc000784000) (3) Data frame handling\nI0705 13:01:25.776645 80 log.go:172] (0xc000784000) (3) Data frame sent\nI0705 13:01:25.776781 80 log.go:172] (0xc0007ec630) Data frame received for 3\nI0705 13:01:25.776797 80 log.go:172] (0xc000784000) (3) Data frame handling\nI0705 13:01:25.777103 80 log.go:172] (0xc0007ec630) Data frame received for 5\nI0705 13:01:25.777330 80 log.go:172] (0xc0006de820) (5) Data frame handling\nI0705 13:01:25.778953 80 log.go:172] (0xc0007ec630) Data frame received for 1\nI0705 13:01:25.778994 80 log.go:172] (0xc0006de780) (1) Data frame handling\nI0705 13:01:25.779017 80 log.go:172] (0xc0006de780) (1) Data frame sent\nI0705 13:01:25.779046 80 log.go:172] (0xc0007ec630) (0xc0006de780) Stream removed, broadcasting: 1\nI0705 13:01:25.779080 80 log.go:172] (0xc0007ec630) Go away received\nI0705 13:01:25.779524 80 log.go:172] (0xc0007ec630) (0xc0006de780) Stream removed, broadcasting: 1\nI0705 13:01:25.779548 80 log.go:172] (0xc0007ec630) (0xc000784000) Stream removed, broadcasting: 3\nI0705 13:01:25.779558 80 log.go:172] (0xc0007ec630) (0xc0006de820) Stream removed, broadcasting: 5\n"
Jul 5 13:01:25.784: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul 5 13:01:25.784: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Jul 5 13:01:35.816: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul 5 13:01:45.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3987 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul 5 13:01:46.048: INFO: stderr: "I0705 13:01:45.980675 100 log.go:172] (0xc0009aa420) (0xc0006b06e0) Create stream\nI0705 13:01:45.980727 100 log.go:172] (0xc0009aa420) (0xc0006b06e0) Stream added, broadcasting: 1\nI0705 13:01:45.984061 100 log.go:172] (0xc0009aa420) Reply frame received for 1\nI0705 13:01:45.984105 100 log.go:172] (0xc0009aa420) (0xc0006b0000) Create stream\nI0705 13:01:45.984129 100 log.go:172] (0xc0009aa420) (0xc0006b0000) Stream added, broadcasting: 3\nI0705 13:01:45.985556 100 log.go:172] (0xc0009aa420) Reply frame received for 3\nI0705 13:01:45.985609 100 log.go:172] (0xc0009aa420) (0xc0006b00a0) Create stream\nI0705 13:01:45.985625 100 log.go:172] (0xc0009aa420) (0xc0006b00a0) Stream added, broadcasting: 5\nI0705 13:01:45.986606 100 log.go:172] (0xc0009aa420) Reply frame received for 5\nI0705 13:01:46.042245 100 log.go:172] (0xc0009aa420) Data frame received for 5\nI0705 13:01:46.042280 100 log.go:172] (0xc0006b00a0) (5) Data frame handling\nI0705 13:01:46.042292 100 log.go:172] (0xc0006b00a0) (5) Data frame sent\nI0705 13:01:46.042302 100 log.go:172] (0xc0009aa420) Data frame received for 5\nI0705 13:01:46.042311 100 log.go:172] (0xc0006b00a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0705 13:01:46.042337 100 log.go:172] (0xc0009aa420) Data frame received for 3\nI0705 13:01:46.042350 100 log.go:172] (0xc0006b0000) (3) Data frame handling\nI0705 13:01:46.042366 100 log.go:172] (0xc0006b0000) (3) Data frame sent\nI0705 13:01:46.042376 100 log.go:172] (0xc0009aa420) Data frame received for 3\nI0705 13:01:46.042384 100 log.go:172] (0xc0006b0000) (3) Data frame handling\nI0705 13:01:46.043907 100 log.go:172] (0xc0009aa420) Data frame received for 1\nI0705 13:01:46.043937 100 log.go:172] (0xc0006b06e0) (1) Data frame handling\nI0705 13:01:46.043955 100 log.go:172] (0xc0006b06e0) (1) Data frame sent\nI0705 13:01:46.043972 100 log.go:172] (0xc0009aa420) (0xc0006b06e0) Stream removed, broadcasting: 1\nI0705 13:01:46.043990 100 log.go:172] (0xc0009aa420) Go away received\nI0705 13:01:46.044328 100 log.go:172] (0xc0009aa420) (0xc0006b06e0) Stream removed, broadcasting: 1\nI0705 13:01:46.044346 100 log.go:172] (0xc0009aa420) (0xc0006b0000) Stream removed, broadcasting: 3\nI0705 13:01:46.044354 100 log.go:172] (0xc0009aa420) (0xc0006b00a0) Stream removed, broadcasting: 5\n"
Jul 5 13:01:46.049: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul 5 13:01:46.049: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Jul 5 13:01:56.100: INFO: Waiting for StatefulSet statefulset-3987/ss2 to complete update
Jul 5 13:01:56.100: INFO: Waiting for Pod statefulset-3987/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 5 13:01:56.100: INFO: Waiting for Pod statefulset-3987/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 5 13:02:06.109: INFO: Waiting for StatefulSet statefulset-3987/ss2 to complete update
Jul 5 13:02:06.109: INFO: Waiting for Pod statefulset-3987/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jul 5 13:02:16.109: INFO: Waiting for StatefulSet statefulset-3987/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jul 5 13:02:26.108: INFO: Deleting all statefulset in ns statefulset-3987
Jul 5 13:02:26.112: INFO: Scaling statefulset ss2 to 0
Jul 5 13:02:46.130: INFO: Waiting for statefulset status.replicas updated to 0
Jul 5 13:02:46.133: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 5 13:02:46.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3987" for this suite.
Jul 5 13:02:54.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 13:02:54.236: INFO: namespace statefulset-3987 deletion completed in 8.087143622s

• [SLOW TEST:171.941 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 5 13:02:54.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jul 5 13:02:54.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3685 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jul 5 13:02:57.385: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0705 13:02:57.288764 122 log.go:172] (0xc000118bb0) (0xc000782640) Create stream\nI0705 13:02:57.288818 122 log.go:172] (0xc000118bb0) (0xc000782640) Stream added, broadcasting: 1\nI0705 13:02:57.293566 122 log.go:172] (0xc000118bb0) Reply frame received for 1\nI0705 13:02:57.293598 122 log.go:172] (0xc000118bb0) (0xc000782000) Create stream\nI0705 13:02:57.293606 122 log.go:172] (0xc000118bb0) (0xc000782000) Stream added, broadcasting: 3\nI0705 13:02:57.294568 122 log.go:172] (0xc000118bb0) Reply frame received for 3\nI0705 13:02:57.294604 122 log.go:172] (0xc000118bb0) (0xc0007820a0) Create stream\nI0705 13:02:57.294613 122 log.go:172] (0xc000118bb0) (0xc0007820a0) Stream added, broadcasting: 5\nI0705 13:02:57.295472 122 log.go:172] (0xc000118bb0) Reply frame received for 5\nI0705 13:02:57.295517 122 log.go:172] (0xc000118bb0) (0xc00018c000) Create stream\nI0705 13:02:57.295537 122 log.go:172] (0xc000118bb0) (0xc00018c000) Stream added, broadcasting: 7\nI0705 13:02:57.296463 122 log.go:172] (0xc000118bb0) Reply frame received for 7\nI0705 13:02:57.296567 122 log.go:172] (0xc000782000) (3) Writing data frame\nI0705 13:02:57.296665 122 log.go:172] (0xc000782000) (3) Writing data frame\nI0705 13:02:57.297804 122 log.go:172] (0xc000118bb0) Data frame received for 5\nI0705 13:02:57.297825 122 log.go:172] (0xc0007820a0) (5) Data frame handling\nI0705 13:02:57.297843 122 log.go:172] (0xc0007820a0) (5) Data frame sent\nI0705 13:02:57.298449 122 log.go:172] (0xc000118bb0) Data frame received for 5\nI0705 13:02:57.298467 122 log.go:172] (0xc0007820a0) (5) Data frame handling\nI0705 13:02:57.298481 122 log.go:172] (0xc0007820a0) (5) Data frame sent\nI0705 13:02:57.342987 122 log.go:172] (0xc000118bb0) Data frame received for 5\nI0705 13:02:57.343033 122 log.go:172] (0xc0007820a0) (5) Data frame handling\nI0705 13:02:57.343068 122 log.go:172] (0xc000118bb0) Data frame received for 7\nI0705 13:02:57.343246 122 log.go:172] (0xc00018c000) (7) Data frame handling\nI0705 13:02:57.343383 122 log.go:172] (0xc000118bb0) Data frame received for 1\nI0705 13:02:57.343406 122 log.go:172] (0xc000782640) (1) Data frame handling\nI0705 13:02:57.343436 122 log.go:172] (0xc000782640) (1) Data frame sent\nI0705 13:02:57.343472 122 log.go:172] (0xc000118bb0) (0xc000782000) Stream removed, broadcasting: 3\nI0705 13:02:57.343530 122 log.go:172] (0xc000118bb0) (0xc000782640) Stream removed, broadcasting: 1\nI0705 13:02:57.343580 122 log.go:172] (0xc000118bb0) Go away received\nI0705 13:02:57.343649 122 log.go:172] (0xc000118bb0) (0xc000782640) Stream removed, broadcasting: 1\nI0705 13:02:57.343669 122 log.go:172] (0xc000118bb0) (0xc000782000) Stream removed, broadcasting: 3\nI0705 13:02:57.343681 122 log.go:172] (0xc000118bb0) (0xc0007820a0) Stream removed, broadcasting: 5\nI0705 13:02:57.343693 122 log.go:172] (0xc000118bb0) (0xc00018c000) Stream removed, broadcasting: 7\n"
Jul 5 13:02:57.386: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 5 13:02:59.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3685" for this suite.
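The deprecated --generator=job/v1 path in the spec above builds the Job client-side before creating it. A minimal sketch of the equivalent object, reconstructed from the logged command line (the container name and stdin wiring are assumptions):

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
  namespace: kubectl-3685
spec:
  template:
    spec:
      restartPolicy: OnFailure                  # --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29   # --image=...
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
        stdin: true                             # --stdin: the test attaches and pipes "abcd1234"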
Jul 5 13:03:07.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 13:03:07.485: INFO: namespace kubectl-3685 deletion completed in 8.088708262s

• [SLOW TEST:13.249 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 5 13:03:07.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-6522
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6522 to expose endpoints map[]
Jul 5 13:03:07.663: INFO: Get endpoints failed (44.937352ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jul 5 13:03:08.666: INFO: successfully validated that service multi-endpoint-test in namespace services-6522 exposes endpoints map[] (1.048150935s elapsed)
STEP: Creating pod pod1 in namespace services-6522
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6522 to expose endpoints map[pod1:[100]]
Jul 5 13:03:12.713: INFO: successfully validated that service multi-endpoint-test in namespace services-6522 exposes endpoints map[pod1:[100]] (4.040770379s elapsed)
STEP: Creating pod pod2 in namespace services-6522
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6522 to expose endpoints map[pod1:[100] pod2:[101]]
Jul 5 13:03:16.816: INFO: successfully validated that service multi-endpoint-test in namespace services-6522 exposes endpoints map[pod1:[100] pod2:[101]] (4.097977552s elapsed)
STEP: Deleting pod pod1 in namespace services-6522
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6522 to expose endpoints map[pod2:[101]]
Jul 5 13:03:17.857: INFO: successfully validated that service multi-endpoint-test in namespace services-6522 exposes endpoints map[pod2:[101]] (1.036887762s elapsed)
STEP: Deleting pod pod2 in namespace services-6522
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6522 to expose endpoints map[]
Jul 5 13:03:18.891: INFO: successfully validated that service multi-endpoint-test in namespace services-6522 exposes endpoints map[] (1.008081759s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 5 13:03:18.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6522" for this suite.
Jul 5 13:03:41.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 13:03:41.133: INFO: namespace services-6522 deletion completed in 22.10747749s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:33.648 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 5 13:03:41.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-8b1be47e-0bbf-4fec-88ca-2746fac3fba5
STEP: Creating a pod to test consume configMaps
Jul 5 13:03:41.254: INFO: Waiting up to 5m0s for pod "pod-configmaps-22a3b2ec-b85b-4504-b601-6b7c57b6fa47" in namespace "configmap-732" to be "success or failure"
Jul 5 13:03:41.264: INFO: Pod "pod-configmaps-22a3b2ec-b85b-4504-b601-6b7c57b6fa47": Phase="Pending", Reason="", readiness=false. Elapsed: 9.447905ms
Jul 5 13:03:43.275: INFO: Pod "pod-configmaps-22a3b2ec-b85b-4504-b601-6b7c57b6fa47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020457263s
Jul 5 13:03:45.279: INFO: Pod "pod-configmaps-22a3b2ec-b85b-4504-b601-6b7c57b6fa47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024554662s
STEP: Saw pod success
Jul 5 13:03:45.279: INFO: Pod "pod-configmaps-22a3b2ec-b85b-4504-b601-6b7c57b6fa47" satisfied condition "success or failure"
Jul 5 13:03:45.282: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-22a3b2ec-b85b-4504-b601-6b7c57b6fa47 container configmap-volume-test:
STEP: delete the pod
Jul 5 13:03:45.360: INFO: Waiting for pod pod-configmaps-22a3b2ec-b85b-4504-b601-6b7c57b6fa47 to disappear
Jul 5 13:03:45.366: INFO: Pod pod-configmaps-22a3b2ec-b85b-4504-b601-6b7c57b6fa47 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 5 13:03:45.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-732" for this suite.
Jul 5 13:03:51.382: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 13:03:51.456: INFO: namespace configmap-732 deletion completed in 6.086557049s

• [SLOW TEST:10.322 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 5 13:03:51.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jul 5 13:03:51.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1054'
Jul 5 13:03:51.811: INFO: stderr: ""
Jul 5 13:03:51.811: INFO: stdout: "pod/pause created\n"
Jul 5 13:03:51.811: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jul 5 13:03:51.811: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1054" to be "running and ready"
Jul 5 13:03:51.815: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.685499ms
Jul 5 13:03:53.820: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008176331s
Jul 5 13:03:55.824: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.012575536s
Jul 5 13:03:55.824: INFO: Pod "pause" satisfied condition "running and ready"
Jul 5 13:03:55.824: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jul 5 13:03:55.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1054'
Jul 5 13:03:55.927: INFO: stderr: ""
Jul 5 13:03:55.927: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jul 5 13:03:55.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1054'
Jul 5 13:03:56.025: INFO: stderr: ""
Jul 5 13:03:56.025: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n"
STEP: removing the label testing-label of a pod
Jul 5 13:03:56.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1054'
Jul 5 13:03:56.121: INFO: stderr: ""
Jul 5 13:03:56.121: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jul 5 13:03:56.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1054'
Jul 5 13:03:56.207: INFO: stderr: ""
Jul 5 13:03:56.207: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jul 5 13:03:56.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1054'
Jul 5 13:03:56.357: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 5 13:03:56.357: INFO: stdout: "pod \"pause\" force deleted\n"
Jul 5 13:03:56.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1054'
Jul 5 13:03:56.553: INFO: stderr: "No resources found.\n"
Jul 5 13:03:56.553: INFO: stdout: ""
Jul 5 13:03:56.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1054 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 5 13:03:56.648: INFO: stderr: ""
Jul 5 13:03:56.648: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 5 13:03:56.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1054" for this suite.
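The "pause" pod being labeled above was piped to kubectl create -f -; a minimal equivalent manifest (the image is an assumption; the name=pause label is implied by the cleanup's -l name=pause selector):

apiVersion: v1
kind: Pod
metadata:
  name: pause
  namespace: kubectl-1054
  labels:
    name: pause                   # matched by the -l name=pause cleanup queries
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed image/version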
Jul 5 13:04:02.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 13:04:02.800: INFO: namespace kubectl-1054 deletion completed in 6.148015053s

• [SLOW TEST:11.344 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 5 13:04:02.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-50525688-cf98-47f2-a0a2-6ce910522516
STEP: Creating a pod to test consume configMaps
Jul 5 13:04:02.972: INFO: Waiting up to 5m0s for pod "pod-configmaps-8cffe8f3-84d2-48ad-9035-bc6b39378aad" in namespace "configmap-7958" to be "success or failure"
Jul 5 13:04:02.977: INFO: Pod "pod-configmaps-8cffe8f3-84d2-48ad-9035-bc6b39378aad": Phase="Pending", Reason="", readiness=false. Elapsed: 5.712555ms
Jul 5 13:04:05.017: INFO: Pod "pod-configmaps-8cffe8f3-84d2-48ad-9035-bc6b39378aad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045025633s
Jul 5 13:04:07.035: INFO: Pod "pod-configmaps-8cffe8f3-84d2-48ad-9035-bc6b39378aad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063335371s
STEP: Saw pod success
Jul 5 13:04:07.035: INFO: Pod "pod-configmaps-8cffe8f3-84d2-48ad-9035-bc6b39378aad" satisfied condition "success or failure"
Jul 5 13:04:07.039: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-8cffe8f3-84d2-48ad-9035-bc6b39378aad container configmap-volume-test:
STEP: delete the pod
Jul 5 13:04:07.092: INFO: Waiting for pod pod-configmaps-8cffe8f3-84d2-48ad-9035-bc6b39378aad to disappear
Jul 5 13:04:07.115: INFO: Pod pod-configmaps-8cffe8f3-84d2-48ad-9035-bc6b39378aad no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 5 13:04:07.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7958" for this suite.
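A sketch of the "multiple volumes" arrangement this spec checks: the same ConfigMap mounted at two paths in one pod. The pod and ConfigMap names come from the log; mount paths, keys, and the image are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-8cffe8f3-84d2-48ad-9035-bc6b39378aad
  namespace: configmap-7958
spec:
  volumes:                         # one ConfigMap, two volumes
  - name: configmap-volume-1
    configMap:
      name: configmap-test-volume-50525688-cf98-47f2-a0a2-6ce910522516
  - name: configmap-volume-2
    configMap:
      name: configmap-test-volume-50525688-cf98-47f2-a0a2-6ce910522516
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_content=/etc/configmap-volume-1/data-1"]  # assumed key/path
    volumeMounts:
    - name: configmap-volume-1
      mountPath: /etc/configmap-volume-1
    - name: configmap-volume-2
      mountPath: /etc/configmap-volume-2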
Jul 5 13:04:13.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 13:04:13.231: INFO: namespace configmap-7958 deletion completed in 6.111575513s

• [SLOW TEST:10.431 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 5 13:04:13.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul 5 13:04:13.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4239'
Jul 5 13:04:13.425: INFO: stderr: ""
Jul 5 13:04:13.425: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Jul 5 13:04:13.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-4239'
Jul 5 13:04:25.911: INFO: stderr: ""
Jul 5 13:04:25.912: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 5 13:04:25.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4239" for this suite.
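With --generator=run-pod/v1 the command above creates a bare Pod rather than a workload controller. Everything in this sketch follows from the logged command line except the generator-added label, which is an assumption:

apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-4239
  labels:
    run: e2e-test-nginx-pod     # label the generator adds (assumed form)
spec:
  restartPolicy: Never          # --restart=Never
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/nginx:1.14-alpine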
Jul 5 13:04:31.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 13:04:32.006: INFO: namespace kubectl-4239 deletion completed in 6.091320487s

• [SLOW TEST:18.775 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 5 13:04:32.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-e921eca9-79ed-4520-84ac-34ac7d15ade0 in namespace container-probe-7442
Jul 5 13:04:36.149: INFO: Started pod liveness-e921eca9-79ed-4520-84ac-34ac7d15ade0 in namespace container-probe-7442
STEP: checking the pod's current state and verifying that restartCount is present
Jul 5 13:04:36.152: INFO: Initial restart count of pod liveness-e921eca9-79ed-4520-84ac-34ac7d15ade0 is 0
Jul 5 13:05:00.203: INFO: Restart count of pod container-probe-7442/liveness-e921eca9-79ed-4520-84ac-34ac7d15ade0 is now 1 (24.051373244s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 5 13:05:00.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7442" for this suite.
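This variant needs a container whose /healthz endpoint succeeds at first and then starts failing, so the kubelet restarts it (restartCount goes from 0 to 1 above). A sketch, with the image and timings assumed:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-e921eca9-79ed-4520-84ac-34ac7d15ade0
  namespace: container-probe-7442
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.1   # assumed: /healthz returns 200 at first, then errors
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080                # assumed port
      initialDelaySeconds: 15
      failureThreshold: 1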
Jul 5 13:05:06.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 13:05:06.370: INFO: namespace container-probe-7442 deletion completed in 6.133216684s

• [SLOW TEST:34.363 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 5 13:05:06.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jul 5 13:05:06.456: INFO: Waiting up to 5m0s for pod "pod-362bd92e-1a44-4922-90ae-384d435105ce" in namespace "emptydir-1808" to be "success or failure"
Jul 5 13:05:06.464: INFO: Pod "pod-362bd92e-1a44-4922-90ae-384d435105ce": Phase="Pending", Reason="", readiness=false. Elapsed: 7.363692ms
Jul 5 13:05:08.467: INFO: Pod "pod-362bd92e-1a44-4922-90ae-384d435105ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010842232s
Jul 5 13:05:10.471: INFO: Pod "pod-362bd92e-1a44-4922-90ae-384d435105ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014851693s
STEP: Saw pod success
Jul 5 13:05:10.471: INFO: Pod "pod-362bd92e-1a44-4922-90ae-384d435105ce" satisfied condition "success or failure"
Jul 5 13:05:10.475: INFO: Trying to get logs from node iruya-worker2 pod pod-362bd92e-1a44-4922-90ae-384d435105ce container test-container:
STEP: delete the pod
Jul 5 13:05:10.510: INFO: Waiting for pod pod-362bd92e-1a44-4922-90ae-384d435105ce to disappear
Jul 5 13:05:10.512: INFO: Pod pod-362bd92e-1a44-4922-90ae-384d435105ce no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 5 13:05:10.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1808" for this suite.
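The "volume on tmpfs" case is an emptyDir with medium: Memory; the test container prints the mount's filesystem type and mode and exits, which is why the pod runs to Succeeded. A sketch with the image and flags assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-362bd92e-1a44-4922-90ae-384d435105ce
  namespace: emptydir-1808
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0         # assumed test image
    args: ["--fs_type=/test-volume", "--file_perm=/test-volume"]   # assumed flags
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume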
Jul 5 13:05:16.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 13:05:16.622: INFO: namespace emptydir-1808 deletion completed in 6.106532262s

• [SLOW TEST:10.252 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 5 13:05:16.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Jul 5 13:05:21.252: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5296 pod-service-account-dc74cf0a-0a06-440a-b79f-ca565ce858a2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jul 5 13:05:21.482: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5296 pod-service-account-dc74cf0a-0a06-440a-b79f-ca565ce858a2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jul 5 13:05:21.683: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-5296 pod-service-account-dc74cf0a-0a06-440a-b79f-ca565ce858a2 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 5 13:05:21.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5296" for this suite.
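No volume needs to be declared for the three files read above: the service account admission controller injects the token volume automatically. A sketch of the test pod (image and command assumed), with the standard mount path noted:

apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-dc74cf0a-0a06-440a-b79f-ca565ce858a2
  namespace: svcaccounts-5296
spec:
  serviceAccountName: default    # assumed; the spec reads its auto-created token
  containers:
  - name: test                   # the -c=test container targeted by kubectl exec above
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    command: ["sleep", "10000"]  # assumed: just keeps the pod alive
    # token, ca.crt and namespace are auto-mounted under
    # /var/run/secrets/kubernetes.io/serviceaccount/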
Jul 5 13:05:27.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul 5 13:05:27.999: INFO: namespace svcaccounts-5296 deletion completed in 6.099060011s

• [SLOW TEST:11.377 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul 5 13:05:27.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Jul 5 13:05:28.063: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Jul 5 13:05:28.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4685'
Jul 5 13:05:28.349: INFO: stderr: ""
Jul 5 13:05:28.349: INFO: stdout: "service/redis-slave created\n"
Jul 5 13:05:28.349: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Jul 5 13:05:28.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4685'
Jul 5 13:05:28.649: INFO: stderr: ""
Jul 5 13:05:28.649: INFO: stdout: "service/redis-master created\n"
Jul 5 13:05:28.649: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul 5 13:05:28.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4685'
Jul 5 13:05:28.913: INFO: stderr: ""
Jul 5 13:05:28.913: INFO: stdout: "service/frontend created\n"
Jul 5 13:05:28.913: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Jul 5 13:05:28.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4685'
Jul 5 13:05:29.173: INFO: stderr: ""
Jul 5 13:05:29.173: INFO: stdout: "deployment.apps/frontend created\n"
Jul 5 13:05:29.174: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul 5 13:05:29.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4685'
Jul 5 13:05:29.858: INFO: stderr: ""
Jul 5 13:05:29.858: INFO: stdout: "deployment.apps/redis-master created\n"
Jul 5 13:05:29.858: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Jul 5 13:05:29.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4685'
Jul 5 13:05:30.506: INFO: stderr: ""
Jul 5 13:05:30.506: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Jul 5 13:05:30.506: INFO: Waiting for all frontend pods to be Running.
Jul 5 13:05:40.557: INFO: Waiting for frontend to serve content.
Jul 5 13:05:40.583: INFO: Trying to add a new entry to the guestbook.
Jul 5 13:05:40.601: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jul 5 13:05:40.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4685'
Jul 5 13:05:40.776: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 5 13:05:40.776: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul 5 13:05:40.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4685'
Jul 5 13:05:40.907: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 5 13:05:40.907: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 5 13:05:40.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4685'
Jul 5 13:05:41.032: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 5 13:05:41.032: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 5 13:05:41.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4685'
Jul 5 13:05:41.139: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 5 13:05:41.139: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 5 13:05:41.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4685'
Jul 5 13:05:41.247: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 5 13:05:41.247: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 5 13:05:41.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4685'
Jul 5 13:05:41.375: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 5 13:05:41.375: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul 5 13:05:41.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4685" for this suite.
Jul 5 13:06:19.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:06:19.597: INFO: namespace kubectl-4685 deletion completed in 38.209093162s • [SLOW TEST:51.598 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:06:19.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5281 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 5 13:06:19.645: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 5 13:06:45.859: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.105 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5281 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 5 13:06:45.859: INFO: >>> kubeConfig: /root/.kube/config I0705 13:06:45.891637 6 log.go:172] (0xc0013f8580) (0xc001e340a0) Create stream I0705 13:06:45.891672 6 log.go:172] (0xc0013f8580) (0xc001e340a0) Stream added, broadcasting: 1 I0705 13:06:45.893262 6 log.go:172] (0xc0013f8580) Reply frame received for 1 I0705 13:06:45.893295 6 log.go:172] (0xc0013f8580) (0xc002890960) Create stream I0705 13:06:45.893306 6 log.go:172] (0xc0013f8580) (0xc002890960) Stream added, broadcasting: 3 I0705 13:06:45.894166 6 log.go:172] (0xc0013f8580) Reply frame received for 3 I0705 13:06:45.894184 6 log.go:172] (0xc0013f8580) (0xc00211c8c0) Create stream I0705 13:06:45.894195 6 log.go:172] (0xc0013f8580) (0xc00211c8c0) Stream added, broadcasting: 5 I0705 13:06:45.894922 6 log.go:172] (0xc0013f8580) Reply frame received for 5 I0705 13:06:47.009485 6 log.go:172] (0xc0013f8580) Data frame received for 5 I0705 13:06:47.009532 6 log.go:172] (0xc00211c8c0) (5) Data frame handling I0705 13:06:47.009567 6 log.go:172] (0xc0013f8580) Data frame received for 3 I0705 13:06:47.009582 6 log.go:172] (0xc002890960) (3) Data frame handling I0705 13:06:47.009601 6 log.go:172] (0xc002890960) (3) Data frame sent I0705 13:06:47.009613 6 log.go:172] (0xc0013f8580) Data frame received for 3 I0705 13:06:47.009624 6 log.go:172] (0xc002890960) (3) Data frame handling I0705 13:06:47.011531 6 log.go:172] (0xc0013f8580) Data frame received for 1 I0705 13:06:47.011553 6 log.go:172] (0xc001e340a0) (1) Data frame 
handling I0705 13:06:47.011580 6 log.go:172] (0xc001e340a0) (1) Data frame sent I0705 13:06:47.011596 6 log.go:172] (0xc0013f8580) (0xc001e340a0) Stream removed, broadcasting: 1 I0705 13:06:47.011615 6 log.go:172] (0xc0013f8580) Go away received I0705 13:06:47.011945 6 log.go:172] (0xc0013f8580) (0xc001e340a0) Stream removed, broadcasting: 1 I0705 13:06:47.011963 6 log.go:172] (0xc0013f8580) (0xc002890960) Stream removed, broadcasting: 3 I0705 13:06:47.011975 6 log.go:172] (0xc0013f8580) (0xc00211c8c0) Stream removed, broadcasting: 5 Jul 5 13:06:47.011: INFO: Found all expected endpoints: [netserver-0] Jul 5 13:06:47.015: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.58 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5281 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 5 13:06:47.015: INFO: >>> kubeConfig: /root/.kube/config I0705 13:06:47.042197 6 log.go:172] (0xc0009e8840) (0xc001247900) Create stream I0705 13:06:47.042231 6 log.go:172] (0xc0009e8840) (0xc001247900) Stream added, broadcasting: 1 I0705 13:06:47.047439 6 log.go:172] (0xc0009e8840) Reply frame received for 1 I0705 13:06:47.047488 6 log.go:172] (0xc0009e8840) (0xc002890a00) Create stream I0705 13:06:47.047503 6 log.go:172] (0xc0009e8840) (0xc002890a00) Stream added, broadcasting: 3 I0705 13:06:47.048615 6 log.go:172] (0xc0009e8840) Reply frame received for 3 I0705 13:06:47.048662 6 log.go:172] (0xc0009e8840) (0xc00211c960) Create stream I0705 13:06:47.048689 6 log.go:172] (0xc0009e8840) (0xc00211c960) Stream added, broadcasting: 5 I0705 13:06:47.049898 6 log.go:172] (0xc0009e8840) Reply frame received for 5 I0705 13:06:48.142417 6 log.go:172] (0xc0009e8840) Data frame received for 3 I0705 13:06:48.142467 6 log.go:172] (0xc002890a00) (3) Data frame handling I0705 13:06:48.142482 6 log.go:172] (0xc002890a00) (3) Data frame sent I0705 13:06:48.142494 6 log.go:172] (0xc0009e8840) Data frame received for 3 I0705 13:06:48.142515 6 log.go:172] (0xc002890a00) (3) Data frame handling I0705 13:06:48.142991 6 log.go:172] (0xc0009e8840) Data frame received for 5 I0705 13:06:48.143031 6 log.go:172] (0xc00211c960) (5) Data frame handling I0705 13:06:48.145388 6 log.go:172] (0xc0009e8840) Data frame received for 1 I0705 13:06:48.145411 6 log.go:172] (0xc001247900) (1) Data frame handling I0705 13:06:48.145424 6 log.go:172] (0xc001247900) (1) Data frame sent I0705 13:06:48.145731 6 log.go:172] (0xc0009e8840) (0xc001247900) Stream removed, broadcasting: 1 I0705 13:06:48.145879 6 log.go:172] (0xc0009e8840) (0xc001247900) Stream removed, broadcasting: 1 I0705 13:06:48.145909 6 log.go:172] (0xc0009e8840) (0xc002890a00) Stream removed, broadcasting: 3 I0705 13:06:48.146149 6 log.go:172] (0xc0009e8840) (0xc00211c960) Stream removed, broadcasting: 5 Jul 5 13:06:48.146: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:06:48.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0705 13:06:48.146367 6 log.go:172] (0xc0009e8840) Go away received STEP: Destroying namespace "pod-network-test-5281" for this suite. 
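For reference, the UDP reachability check buried in the stream dump above reduces to a single shell probe run from the host-network helper pod. Replayed by hand it would look like this (the pod IP and port are copied from the ExecWithOptions line, so they are only meaningful for this test run):

kubectl --kubeconfig=/root/.kube/config exec -n pod-network-test-5281 host-test-container-pod -c hostexec -- \
    /bin/sh -c 'echo hostName | nc -w 1 -u 10.244.2.105 8081 | grep -v "^\s*$"'

The trailing grep strips blank lines so the framework can compare the echoed hostname against the expected netserver endpoints.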
Jul 5 13:07:12.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:07:12.242: INFO: namespace pod-network-test-5281 deletion completed in 24.090875538s • [SLOW TEST:52.645 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:07:12.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:07:16.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7722" for this suite. 
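The hostAliases spec above passes without ever printing the pod it schedules. Purely as an illustration (names and addresses invented, not the e2e pod), the feature under test maps spec.hostAliases entries into the container's /etc/hosts, roughly like this:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases    # illustrative name, not the e2e pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"          # example address
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]   # the test asserts the aliases appear here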
Jul 5 13:07:54.407: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:07:54.485: INFO: namespace kubelet-test-7722 deletion completed in 38.106238634s • [SLOW TEST:42.243 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:07:54.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2307 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 5 13:07:54.579: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jul 5 13:08:20.757: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.106:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2307 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 5 13:08:20.757: INFO: >>> kubeConfig: /root/.kube/config I0705 13:08:20.793922 6 log.go:172] (0xc000539130) (0xc0003af220) Create stream I0705 13:08:20.793956 6 log.go:172] (0xc000539130) (0xc0003af220) Stream added, broadcasting: 1 I0705 13:08:20.795913 6 log.go:172] (0xc000539130) Reply frame received for 1 I0705 13:08:20.795973 6 log.go:172] (0xc000539130) (0xc0002f2780) Create stream I0705 13:08:20.795984 6 log.go:172] (0xc000539130) (0xc0002f2780) Stream added, broadcasting: 3 I0705 13:08:20.796957 6 log.go:172] (0xc000539130) Reply frame received for 3 I0705 13:08:20.796997 6 log.go:172] (0xc000539130) (0xc0003af5e0) Create stream I0705 13:08:20.797010 6 log.go:172] (0xc000539130) (0xc0003af5e0) Stream added, broadcasting: 5 I0705 13:08:20.798314 6 log.go:172] (0xc000539130) Reply frame received for 5 I0705 13:08:20.897909 6 log.go:172] (0xc000539130) Data frame received for 3 I0705 13:08:20.898001 6 log.go:172] (0xc0002f2780) (3) Data frame handling I0705 13:08:20.898036 6 log.go:172] (0xc0002f2780) (3) Data frame sent I0705 13:08:20.898254 6 log.go:172] (0xc000539130) Data frame received for 5 I0705 13:08:20.898316 6 log.go:172] (0xc0003af5e0) (5) Data frame handling I0705 13:08:20.898557 6 log.go:172] (0xc000539130) Data frame received for 3 I0705 13:08:20.898578 6 log.go:172] (0xc0002f2780) (3) Data frame handling I0705 13:08:20.901624 6 log.go:172] (0xc000539130) Data frame received for 1 
I0705 13:08:20.901671 6 log.go:172] (0xc0003af220) (1) Data frame handling I0705 13:08:20.901767 6 log.go:172] (0xc0003af220) (1) Data frame sent I0705 13:08:20.901837 6 log.go:172] (0xc000539130) (0xc0003af220) Stream removed, broadcasting: 1 I0705 13:08:20.901913 6 log.go:172] (0xc000539130) Go away received I0705 13:08:20.902161 6 log.go:172] (0xc000539130) (0xc0003af220) Stream removed, broadcasting: 1 I0705 13:08:20.902191 6 log.go:172] (0xc000539130) (0xc0002f2780) Stream removed, broadcasting: 3 I0705 13:08:20.902202 6 log.go:172] (0xc000539130) (0xc0003af5e0) Stream removed, broadcasting: 5 Jul 5 13:08:20.902: INFO: Found all expected endpoints: [netserver-0] Jul 5 13:08:20.905: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.61:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2307 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 5 13:08:20.905: INFO: >>> kubeConfig: /root/.kube/config I0705 13:08:20.931598 6 log.go:172] (0xc000c034a0) (0xc0002f2e60) Create stream I0705 13:08:20.931626 6 log.go:172] (0xc000c034a0) (0xc0002f2e60) Stream added, broadcasting: 1 I0705 13:08:20.933795 6 log.go:172] (0xc000c034a0) Reply frame received for 1 I0705 13:08:20.933834 6 log.go:172] (0xc000c034a0) (0xc0003a9d60) Create stream I0705 13:08:20.933847 6 log.go:172] (0xc000c034a0) (0xc0003a9d60) Stream added, broadcasting: 3 I0705 13:08:20.934727 6 log.go:172] (0xc000c034a0) Reply frame received for 3 I0705 13:08:20.934762 6 log.go:172] (0xc000c034a0) (0xc000fd8780) Create stream I0705 13:08:20.934774 6 log.go:172] (0xc000c034a0) (0xc000fd8780) Stream added, broadcasting: 5 I0705 13:08:20.935534 6 log.go:172] (0xc000c034a0) Reply frame received for 5 I0705 13:08:21.002922 6 log.go:172] (0xc000c034a0) Data frame received for 3 I0705 13:08:21.002963 6 log.go:172] (0xc0003a9d60) (3) Data frame handling I0705 13:08:21.002989 6 log.go:172] (0xc0003a9d60) (3) Data frame sent I0705 13:08:21.003000 6 log.go:172] (0xc000c034a0) Data frame received for 3 I0705 13:08:21.003005 6 log.go:172] (0xc0003a9d60) (3) Data frame handling I0705 13:08:21.003262 6 log.go:172] (0xc000c034a0) Data frame received for 5 I0705 13:08:21.003298 6 log.go:172] (0xc000fd8780) (5) Data frame handling I0705 13:08:21.005478 6 log.go:172] (0xc000c034a0) Data frame received for 1 I0705 13:08:21.005506 6 log.go:172] (0xc0002f2e60) (1) Data frame handling I0705 13:08:21.005529 6 log.go:172] (0xc0002f2e60) (1) Data frame sent I0705 13:08:21.005545 6 log.go:172] (0xc000c034a0) (0xc0002f2e60) Stream removed, broadcasting: 1 I0705 13:08:21.005563 6 log.go:172] (0xc000c034a0) Go away received I0705 13:08:21.005673 6 log.go:172] (0xc000c034a0) (0xc0002f2e60) Stream removed, broadcasting: 1 I0705 13:08:21.005703 6 log.go:172] (0xc000c034a0) (0xc0003a9d60) Stream removed, broadcasting: 3 I0705 13:08:21.005724 6 log.go:172] (0xc000c034a0) (0xc000fd8780) Stream removed, broadcasting: 5 Jul 5 13:08:21.005: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:08:21.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2307" for this suite. 
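As with the UDP variant earlier, the HTTP check is a single curl against the netserver pod's /hostName endpoint, executed from the host-network helper pod. Replayed by hand (again with the run-specific pod IP from the log):

kubectl --kubeconfig=/root/.kube/config exec -n pod-network-test-2307 host-test-container-pod -c hostexec -- \
    /bin/sh -c 'curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.61:8080/hostName'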
Jul 5 13:08:45.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:08:45.091: INFO: namespace pod-network-test-2307 deletion completed in 24.081796621s • [SLOW TEST:50.606 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:08:45.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jul 5 13:08:45.235: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7710,SelfLink:/api/v1/namespaces/watch-7710/configmaps/e2e-watch-test-resource-version,UID:41d9f7ce-5521-4a78-a32c-1d6ea7cf4964,ResourceVersion:230556,Generation:0,CreationTimestamp:2020-07-05 13:08:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 5 13:08:45.235: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7710,SelfLink:/api/v1/namespaces/watch-7710/configmaps/e2e-watch-test-resource-version,UID:41d9f7ce-5521-4a78-a32c-1d6ea7cf4964,ResourceVersion:230557,Generation:0,CreationTimestamp:2020-07-05 13:08:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:08:45.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7710" for this suite. 
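Outside the framework, the same "watch from a specific resource version" behaviour can be observed with a raw API request. The version number below is an assumption inferred from the 230556/230557 events the test received; the log never prints the version returned by the first update:

kubectl --kubeconfig=/root/.kube/config get --raw \
    "/api/v1/namespaces/watch-7710/configmaps?watch=true&resourceVersion=230555"

The server replays every event newer than the supplied version, which is exactly the MODIFIED/DELETED pair logged above.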
Jul 5 13:08:51.253: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:08:51.336: INFO: namespace watch-7710 deletion completed in 6.092694845s • [SLOW TEST:6.244 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:08:51.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jul 5 13:08:51.449: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6967,SelfLink:/api/v1/namespaces/watch-6967/configmaps/e2e-watch-test-watch-closed,UID:63f0f7cb-2a77-472b-93a7-378a660db332,ResourceVersion:230578,Generation:0,CreationTimestamp:2020-07-05 13:08:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jul 5 13:08:51.449: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6967,SelfLink:/api/v1/namespaces/watch-6967/configmaps/e2e-watch-test-watch-closed,UID:63f0f7cb-2a77-472b-93a7-378a660db332,ResourceVersion:230579,Generation:0,CreationTimestamp:2020-07-05 13:08:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jul 5 13:08:51.474: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6967,SelfLink:/api/v1/namespaces/watch-6967/configmaps/e2e-watch-test-watch-closed,UID:63f0f7cb-2a77-472b-93a7-378a660db332,ResourceVersion:230580,Generation:0,CreationTimestamp:2020-07-05 13:08:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jul 5 13:08:51.474: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-6967,SelfLink:/api/v1/namespaces/watch-6967/configmaps/e2e-watch-test-watch-closed,UID:63f0f7cb-2a77-472b-93a7-378a660db332,ResourceVersion:230581,Generation:0,CreationTimestamp:2020-07-05 13:08:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:08:51.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6967" for this suite. Jul 5 13:08:57.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:08:57.603: INFO: namespace watch-6967 deletion completed in 6.091727652s • [SLOW TEST:6.267 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:08:57.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 5 13:09:02.201: INFO: Successfully updated pod "pod-update-activedeadlineseconds-60e65959-f4f3-4bcc-81e5-22f93a27fd63" Jul 5 13:09:02.201: INFO: Waiting up to 5m0s for pod 
"pod-update-activedeadlineseconds-60e65959-f4f3-4bcc-81e5-22f93a27fd63" in namespace "pods-9215" to be "terminated due to deadline exceeded" Jul 5 13:09:02.210: INFO: Pod "pod-update-activedeadlineseconds-60e65959-f4f3-4bcc-81e5-22f93a27fd63": Phase="Running", Reason="", readiness=true. Elapsed: 8.420879ms Jul 5 13:09:04.214: INFO: Pod "pod-update-activedeadlineseconds-60e65959-f4f3-4bcc-81e5-22f93a27fd63": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.012710079s Jul 5 13:09:04.214: INFO: Pod "pod-update-activedeadlineseconds-60e65959-f4f3-4bcc-81e5-22f93a27fd63" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:09:04.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9215" for this suite. Jul 5 13:09:10.232: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:09:10.311: INFO: namespace pods-9215 deletion completed in 6.09304791s • [SLOW TEST:12.707 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:09:10.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 5 13:09:14.929: INFO: Successfully updated pod "pod-update-47e7a201-826a-48fc-bbbd-36d32e6edced" STEP: verifying the updated pod is in kubernetes Jul 5 13:09:14.943: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:09:14.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2141" for this suite. 
Jul 5 13:09:36.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:09:37.049: INFO: namespace pods-2141 deletion completed in 22.102588673s • [SLOW TEST:26.737 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:09:37.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-7e5236b3-9fc3-4139-a43d-f20d45e2cba1 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:09:37.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3642" for this suite. Jul 5 13:09:43.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:09:43.224: INFO: namespace secrets-3642 deletion completed in 6.111647424s • [SLOW TEST:6.174 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:09:43.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 5 13:09:43.296: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae1e14da-b1dc-4566-b13b-19bf45f2db9b" in namespace "downward-api-1507" to be "success or failure" Jul 5 13:09:43.310: INFO: Pod 
"downwardapi-volume-ae1e14da-b1dc-4566-b13b-19bf45f2db9b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.644883ms Jul 5 13:09:45.337: INFO: Pod "downwardapi-volume-ae1e14da-b1dc-4566-b13b-19bf45f2db9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041249734s Jul 5 13:09:47.342: INFO: Pod "downwardapi-volume-ae1e14da-b1dc-4566-b13b-19bf45f2db9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.045527101s Jul 5 13:09:49.346: INFO: Pod "downwardapi-volume-ae1e14da-b1dc-4566-b13b-19bf45f2db9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049643836s STEP: Saw pod success Jul 5 13:09:49.346: INFO: Pod "downwardapi-volume-ae1e14da-b1dc-4566-b13b-19bf45f2db9b" satisfied condition "success or failure" Jul 5 13:09:49.349: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ae1e14da-b1dc-4566-b13b-19bf45f2db9b container client-container: STEP: delete the pod Jul 5 13:09:49.417: INFO: Waiting for pod downwardapi-volume-ae1e14da-b1dc-4566-b13b-19bf45f2db9b to disappear Jul 5 13:09:49.420: INFO: Pod downwardapi-volume-ae1e14da-b1dc-4566-b13b-19bf45f2db9b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:09:49.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1507" for this suite. Jul 5 13:09:55.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:09:55.518: INFO: namespace downward-api-1507 deletion completed in 6.094563277s • [SLOW TEST:12.295 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:09:55.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 5 13:09:55.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jul 5 13:09:55.750: INFO: stderr: "" Jul 5 13:09:55.750: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-05T08:29:36Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", 
BuildDate:\"2020-05-01T02:31:02Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:09:55.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6879" for this suite. Jul 5 13:10:01.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:10:01.864: INFO: namespace kubectl-6879 deletion completed in 6.108541383s • [SLOW TEST:6.345 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:10:01.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jul 5 13:10:01.924: INFO: Waiting up to 5m0s for pod "downward-api-16282156-872a-4c14-8ae5-365a77ca35a7" in namespace "downward-api-6028" to be "success or failure" Jul 5 13:10:01.943: INFO: Pod "downward-api-16282156-872a-4c14-8ae5-365a77ca35a7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.985976ms Jul 5 13:10:03.947: INFO: Pod "downward-api-16282156-872a-4c14-8ae5-365a77ca35a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022369616s Jul 5 13:10:05.978: INFO: Pod "downward-api-16282156-872a-4c14-8ae5-365a77ca35a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05371864s STEP: Saw pod success Jul 5 13:10:05.978: INFO: Pod "downward-api-16282156-872a-4c14-8ae5-365a77ca35a7" satisfied condition "success or failure" Jul 5 13:10:05.982: INFO: Trying to get logs from node iruya-worker2 pod downward-api-16282156-872a-4c14-8ae5-365a77ca35a7 container dapi-container: STEP: delete the pod Jul 5 13:10:06.106: INFO: Waiting for pod downward-api-16282156-872a-4c14-8ae5-365a77ca35a7 to disappear Jul 5 13:10:06.112: INFO: Pod downward-api-16282156-872a-4c14-8ae5-365a77ca35a7 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:10:06.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6028" for this suite. 
Jul 5 13:10:12.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:10:12.208: INFO: namespace downward-api-6028 deletion completed in 6.093297026s • [SLOW TEST:10.343 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:10:12.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jul 5 13:10:12.295: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a1dc369-c362-4140-a302-6a92a113e2d2" in namespace "projected-5811" to be "success or failure" Jul 5 13:10:12.315: INFO: Pod "downwardapi-volume-1a1dc369-c362-4140-a302-6a92a113e2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 19.566926ms Jul 5 13:10:14.335: INFO: Pod "downwardapi-volume-1a1dc369-c362-4140-a302-6a92a113e2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039527123s Jul 5 13:10:16.339: INFO: Pod "downwardapi-volume-1a1dc369-c362-4140-a302-6a92a113e2d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043611724s STEP: Saw pod success Jul 5 13:10:16.339: INFO: Pod "downwardapi-volume-1a1dc369-c362-4140-a302-6a92a113e2d2" satisfied condition "success or failure" Jul 5 13:10:16.342: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1a1dc369-c362-4140-a302-6a92a113e2d2 container client-container: STEP: delete the pod Jul 5 13:10:16.390: INFO: Waiting for pod downwardapi-volume-1a1dc369-c362-4140-a302-6a92a113e2d2 to disappear Jul 5 13:10:16.412: INFO: Pod downwardapi-volume-1a1dc369-c362-4140-a302-6a92a113e2d2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:10:16.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5811" for this suite. 
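The projected variant exposes the same downward API data as files rather than environment variables. A rough equivalent of the pod under test (illustrative only; a divisor of 1m renders a 100m CPU request as the plain number 100):

apiVersion: v1
kind: Pod
metadata:
  name: projected-dapi-demo       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m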
Jul 5 13:10:22.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:10:22.518: INFO: namespace projected-5811 deletion completed in 6.100958109s • [SLOW TEST:10.310 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:10:22.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jul 5 13:10:22.600: INFO: Waiting up to 5m0s for pod "pod-074df2a0-8e6e-418e-991c-43a4d8426694" in namespace "emptydir-8696" to be "success or failure" Jul 5 13:10:22.603: INFO: Pod "pod-074df2a0-8e6e-418e-991c-43a4d8426694": Phase="Pending", Reason="", readiness=false. Elapsed: 2.895655ms Jul 5 13:10:24.639: INFO: Pod "pod-074df2a0-8e6e-418e-991c-43a4d8426694": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038858381s Jul 5 13:10:26.643: INFO: Pod "pod-074df2a0-8e6e-418e-991c-43a4d8426694": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042857183s STEP: Saw pod success Jul 5 13:10:26.643: INFO: Pod "pod-074df2a0-8e6e-418e-991c-43a4d8426694" satisfied condition "success or failure" Jul 5 13:10:26.647: INFO: Trying to get logs from node iruya-worker2 pod pod-074df2a0-8e6e-418e-991c-43a4d8426694 container test-container: STEP: delete the pod Jul 5 13:10:26.812: INFO: Waiting for pod pod-074df2a0-8e6e-418e-991c-43a4d8426694 to disappear Jul 5 13:10:26.900: INFO: Pod pod-074df2a0-8e6e-418e-991c-43a4d8426694 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:10:26.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8696" for this suite. 
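The emptyDir check boils down to the mode bits on the mount point of a volume on the default (node-disk) medium. A sketch of the kind of pod involved (names illustrative; the suite's mounttest container prints the permissions it finds so they can be asserted on):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]   # shows the volume's mode bits
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}    # no medium set, so the default (node storage) applies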
Jul 5 13:10:32.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:10:33.008: INFO: namespace emptydir-8696 deletion completed in 6.104395214s • [SLOW TEST:10.490 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:10:33.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jul 5 13:10:37.709: INFO: Successfully updated pod "annotationupdate91c0b064-c173-4fdd-874d-2a8540628e90" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:10:39.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-175" for this suite. 
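The annotation test depends on the kubelet rewriting downwardAPI volume files when pod metadata changes, which is why the pod only reports success after the update lands. The moving parts, sketched with invented names and values:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo     # illustrative name
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations

Changing the annotation afterwards (kubectl annotate pod annotationupdate-demo build=two --overwrite) updates the mounted file on the next kubelet sync, which is the transition the test waits for.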
Jul 5 13:11:01.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:11:01.823: INFO: namespace downward-api-175 deletion completed in 22.092591407s • [SLOW TEST:28.814 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:11:01.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 5 13:11:01.904: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 5 13:11:06.909: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 5 13:11:06.909: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jul 5 13:11:10.983: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5164,SelfLink:/apis/apps/v1/namespaces/deployment-5164/deployments/test-cleanup-deployment,UID:d9d553a6-9f19-4163-8adf-f6cbcbe1c6eb,ResourceVersion:231094,Generation:1,CreationTimestamp:2020-07-05 13:11:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-05 13:11:07 +0000 UTC 2020-07-05 13:11:07 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-05 13:11:10 +0000 UTC 2020-07-05 13:11:06 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jul 5 13:11:10.986: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5164,SelfLink:/apis/apps/v1/namespaces/deployment-5164/replicasets/test-cleanup-deployment-55bbcbc84c,UID:19f76556-0d65-45ca-b963-fc0689d9a070,ResourceVersion:231083,Generation:1,CreationTimestamp:2020-07-05 13:11:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment d9d553a6-9f19-4163-8adf-f6cbcbe1c6eb 0xc0030c9b37 0xc0030c9b38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 5 13:11:10.990: INFO: Pod "test-cleanup-deployment-55bbcbc84c-xhw9n" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-xhw9n,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5164,SelfLink:/api/v1/namespaces/deployment-5164/pods/test-cleanup-deployment-55bbcbc84c-xhw9n,UID:93b56320-6643-4ca9-b481-d5167282e125,ResourceVersion:231082,Generation:0,CreationTimestamp:2020-07-05 13:11:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 19f76556-0d65-45ca-b963-fc0689d9a070 0xc0026e0387 0xc0026e0388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-wz86r {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-wz86r,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-wz86r true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026e0400} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026e0420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 13:11:07 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 13:11:10 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 
UTC 2020-07-05 13:11:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 13:11:06 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.111,StartTime:2020-07-05 13:11:07 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-05 13:11:09 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://2675946b99283b0b592c1f8924149f260136aa0837cc4f03c8e8802646017c4d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:11:10.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5164" for this suite. Jul 5 13:11:17.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:11:17.147: INFO: namespace deployment-5164 deletion completed in 6.154352021s • [SLOW TEST:15.323 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:11:17.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jul 5 13:11:17.244: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:11:24.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7499" for this suite. 
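A minimal sketch of the pod shape this spec creates (the name, image tag, and commands are illustrative assumptions, not values read from this log): with restartPolicy: Never, the init containers run sequentially and each must exit 0 before the app container is started, which is the behavior "should invoke init containers on a RestartNever pod" asserts.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo                      # hypothetical name
spec:
  restartPolicy: Never
  initContainers:                          # run one at a time, in order
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ['/bin/true']                 # must succeed before init2 starts
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ['/bin/true']                 # must succeed before run1 starts
  containers:
  - name: run1
    image: docker.io/library/busybox:1.29
    command: ['/bin/true']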
Jul 5 13:11:30.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:11:30.811: INFO: namespace init-container-7499 deletion completed in 6.168433211s • [SLOW TEST:13.663 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:11:30.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 5 13:11:30.893: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jul 5 13:11:35.899: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 5 13:11:35.899: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jul 5 13:11:37.902: INFO: Creating deployment "test-rollover-deployment" Jul 5 13:11:37.910: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jul 5 13:11:39.917: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jul 5 13:11:39.923: INFO: Ensure that both replica sets have 1 created replica Jul 5 13:11:39.927: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jul 5 13:11:39.933: INFO: Updating deployment test-rollover-deployment Jul 5 13:11:39.933: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jul 5 13:11:41.948: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jul 5 13:11:41.954: INFO: Make sure deployment "test-rollover-deployment" is complete Jul 5 13:11:41.959: INFO: all replica sets need to contain the pod-template-hash label Jul 5 13:11:41.959: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551500, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is 
progressing."}}, CollisionCount:(*int32)(nil)} Jul 5 13:11:43.967: INFO: all replica sets need to contain the pod-template-hash label Jul 5 13:11:43.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551502, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 5 13:11:45.967: INFO: all replica sets need to contain the pod-template-hash label Jul 5 13:11:45.967: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551502, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 5 13:11:47.968: INFO: all replica sets need to contain the pod-template-hash label Jul 5 13:11:47.968: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551502, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 5 13:11:49.968: INFO: all replica sets need to contain the pod-template-hash label Jul 5 13:11:49.968: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551502, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 5 13:11:51.968: INFO: all replica sets need to contain the pod-template-hash label Jul 5 13:11:51.968: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551502, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729551497, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 5 13:11:53.967: INFO: Jul 5 13:11:53.967: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jul 5 13:11:53.975: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-112,SelfLink:/apis/apps/v1/namespaces/deployment-112/deployments/test-rollover-deployment,UID:e61c8649-e16a-4c88-ad24-0f8ed32bdbb8,ResourceVersion:231316,Generation:2,CreationTimestamp:2020-07-05 13:11:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-05 13:11:37 +0000 UTC 2020-07-05 13:11:37 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-05 13:11:53 +0000 UTC 2020-07-05 13:11:37 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jul 5 13:11:53.979: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-112,SelfLink:/apis/apps/v1/namespaces/deployment-112/replicasets/test-rollover-deployment-854595fc44,UID:c3e355a5-e1c8-4dbc-9e52-6df0d82d4ead,ResourceVersion:231305,Generation:2,CreationTimestamp:2020-07-05 13:11:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e61c8649-e16a-4c88-ad24-0f8ed32bdbb8 0xc002cc56f7 0xc002cc56f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jul 5 13:11:53.979: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jul 5 13:11:53.979: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-112,SelfLink:/apis/apps/v1/namespaces/deployment-112/replicasets/test-rollover-controller,UID:2385dad6-a071-4a82-a322-bed04cf4f7a5,ResourceVersion:231314,Generation:2,CreationTimestamp:2020-07-05 13:11:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e61c8649-e16a-4c88-ad24-0f8ed32bdbb8 0xc002cc5627 0xc002cc5628}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 5 13:11:53.979: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-112,SelfLink:/apis/apps/v1/namespaces/deployment-112/replicasets/test-rollover-deployment-9b8b997cf,UID:54c9a982-5cae-4f2c-8447-5f496ae05bd4,ResourceVersion:231270,Generation:2,CreationTimestamp:2020-07-05 13:11:37 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment e61c8649-e16a-4c88-ad24-0f8ed32bdbb8 0xc002cc57c0 0xc002cc57c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jul 5 13:11:53.982: INFO: Pod "test-rollover-deployment-854595fc44-mmw9f" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-mmw9f,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-112,SelfLink:/api/v1/namespaces/deployment-112/pods/test-rollover-deployment-854595fc44-mmw9f,UID:1172fae1-0047-4a26-899c-63a334625294,ResourceVersion:231282,Generation:0,CreationTimestamp:2020-07-05 13:11:40 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 c3e355a5-e1c8-4dbc-9e52-6df0d82d4ead 0xc002d7ab67 0xc002d7ab68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pwrtr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pwrtr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-pwrtr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d7abe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d7ac00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 13:11:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 13:11:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 
+0000 UTC 2020-07-05 13:11:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 13:11:40 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.113,StartTime:2020-07-05 13:11:40 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-05 13:11:42 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://7960717e84620f94755fb71ab3db1881671d1fe1899699f863c8e4110422959f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:11:53.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-112" for this suite. Jul 5 13:12:02.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:12:02.086: INFO: namespace deployment-112 deletion completed in 8.100530748s • [SLOW TEST:31.275 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:12:02.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0705 13:12:42.612501 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
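For the "delete the rc" step above, the orphaning this spec verifies comes from the options sent with the delete request; a sketch of those options follows (the kubectl flag shown is current syntax and is an assumption, not a command taken from this run):
# Orphan-propagation delete: the ReplicationController object is removed, but
# the garbage collector must NOT cascade to its pods, which the 30-second
# watch above confirms.
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan
# roughly equivalent CLI: kubectl delete rc <name> --cascade=orphan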
Jul 5 13:12:42.612: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:12:42.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-746" for this suite. Jul 5 13:12:50.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:12:50.715: INFO: namespace gc-746 deletion completed in 8.097850289s • [SLOW TEST:48.628 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:12:50.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jul 5 13:12:51.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3188' Jul 5 13:12:54.794: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jul 5 13:12:54.794: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Jul 5 13:12:56.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3188' Jul 5 13:12:57.074: INFO: stderr: "" Jul 5 13:12:57.074: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:12:57.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3188" for this suite. Jul 5 13:13:03.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:13:03.218: INFO: namespace kubectl-3188 deletion completed in 6.140094841s • [SLOW TEST:12.504 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:13:03.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-39404cb2-9ecd-4c1f-9643-6f3981a87203 STEP: Creating a pod to test consume secrets Jul 5 13:13:03.296: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f103284-57bd-4d77-aa6a-bdd014a7b9d9" in namespace "projected-6833" to be "success or failure" Jul 5 13:13:03.339: INFO: Pod "pod-projected-secrets-9f103284-57bd-4d77-aa6a-bdd014a7b9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 42.327072ms Jul 5 13:13:05.342: INFO: Pod "pod-projected-secrets-9f103284-57bd-4d77-aa6a-bdd014a7b9d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046020177s Jul 5 13:13:07.346: INFO: Pod "pod-projected-secrets-9f103284-57bd-4d77-aa6a-bdd014a7b9d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050007361s STEP: Saw pod success Jul 5 13:13:07.346: INFO: Pod "pod-projected-secrets-9f103284-57bd-4d77-aa6a-bdd014a7b9d9" satisfied condition "success or failure" Jul 5 13:13:07.349: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-9f103284-57bd-4d77-aa6a-bdd014a7b9d9 container projected-secret-volume-test: STEP: delete the pod Jul 5 13:13:07.387: INFO: Waiting for pod pod-projected-secrets-9f103284-57bd-4d77-aa6a-bdd014a7b9d9 to disappear Jul 5 13:13:07.422: INFO: Pod pod-projected-secrets-9f103284-57bd-4d77-aa6a-bdd014a7b9d9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:13:07.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6833" for this suite. Jul 5 13:13:13.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:13:13.511: INFO: namespace projected-6833 deletion completed in 6.084754949s • [SLOW TEST:10.292 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:13:13.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:13:18.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5378" for this suite. 
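The adoption scenario above can be reproduced with a manifest along these lines (a sketch: the name label mirrors the log's steps, the image is borrowed from other specs in this run, everything else is assumed). Because the ReplicationController's selector matches the pre-existing pod, its manager adopts the pod (it sets itself as the pod's controller ownerReference) rather than creating a second replica:
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption                     # the 'name' label the RC selects on
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1                              # satisfied by adopting the bare pod
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine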
Jul 5 13:13:40.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:13:40.907: INFO: namespace replication-controller-5378 deletion completed in 22.110407054s • [SLOW TEST:27.396 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:13:40.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 5 13:13:40.986: INFO: Waiting up to 5m0s for pod "pod-a8095c7c-5211-45f1-afc6-ba53c7af1afa" in namespace "emptydir-9983" to be "success or failure" Jul 5 13:13:40.991: INFO: Pod "pod-a8095c7c-5211-45f1-afc6-ba53c7af1afa": Phase="Pending", Reason="", readiness=false. Elapsed: 5.448038ms Jul 5 13:13:42.996: INFO: Pod "pod-a8095c7c-5211-45f1-afc6-ba53c7af1afa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010334215s Jul 5 13:13:45.001: INFO: Pod "pod-a8095c7c-5211-45f1-afc6-ba53c7af1afa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014773193s STEP: Saw pod success Jul 5 13:13:45.001: INFO: Pod "pod-a8095c7c-5211-45f1-afc6-ba53c7af1afa" satisfied condition "success or failure" Jul 5 13:13:45.004: INFO: Trying to get logs from node iruya-worker pod pod-a8095c7c-5211-45f1-afc6-ba53c7af1afa container test-container: STEP: delete the pod Jul 5 13:13:45.023: INFO: Waiting for pod pod-a8095c7c-5211-45f1-afc6-ba53c7af1afa to disappear Jul 5 13:13:45.028: INFO: Pod pod-a8095c7c-5211-45f1-afc6-ba53c7af1afa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jul 5 13:13:45.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9983" for this suite. 
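A sketch of the pod this emptyDir case builds (the real test uses the framework's mount-test image; the name, UID, and command here are illustrative assumptions). The tuple (non-root,0644,default) means: run as a non-root UID, create the test file with mode 0644, and back the volume with the default node-disk medium rather than Memory:
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo                 # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                        # the "non-root" axis of the matrix
  volumes:
  - name: test-volume
    emptyDir: {}                           # default medium; RAM-backed would be medium: Memory
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ['sh', '-c', 'echo data > /ed/file && chmod 0644 /ed/file && stat -c %a /ed/file']
    volumeMounts:
    - name: test-volume
      mountPath: /ed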
Jul 5 13:13:51.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jul 5 13:13:51.136: INFO: namespace emptydir-9983 deletion completed in 6.104441662s • [SLOW TEST:10.229 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jul 5 13:13:51.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jul 5 13:13:51.256: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
alternatives.log
containers/
[the kubelet returns this same two-entry listing for each of the 20 proxy attempts; the captured log is truncated here and resumes partway into the next spec's setup]
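The URL above exercises the node proxy subresource: the apiserver forwards the request to the kubelet on iruya-worker, which serves its /var/log directory, hence the alternatives.log file and containers/ directory in the listing. A manual spot-check could look like the following sketch, assuming the caller has nodes/proxy permission:
# one request of the 20 this spec performs (illustrative):
#   kubectl get --raw /api/v1/nodes/iruya-worker/proxy/logs/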
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-5djk7 in namespace proxy-4455
I0705 13:13:57.546250       6 runners.go:180] Created replication controller with name: proxy-service-5djk7, namespace: proxy-4455, replica count: 1
I0705 13:13:58.596723       6 runners.go:180] proxy-service-5djk7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0705 13:13:59.596910       6 runners.go:180] proxy-service-5djk7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0705 13:14:00.597100       6 runners.go:180] proxy-service-5djk7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0705 13:14:01.597513       6 runners.go:180] proxy-service-5djk7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0705 13:14:02.597701       6 runners.go:180] proxy-service-5djk7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0705 13:14:03.597928       6 runners.go:180] proxy-service-5djk7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0705 13:14:04.598154       6 runners.go:180] proxy-service-5djk7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0705 13:14:05.598363       6 runners.go:180] proxy-service-5djk7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul  5 13:14:05.600: INFO: setup took 8.133510241s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
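Each line that follows is one proxied request, so a short key helps; the grammar below paraphrases the proxy subresource URL forms these attempts exercise. The echoed bodies foo, bar, tls baz, tls qux, and test identify which backend port answered, and "..." marks response bodies the framework truncated:
# /api/v1/namespaces/<ns>/pods/[<scheme>:]<pod>[:<port>]/proxy/<path>
# /api/v1/namespaces/<ns>/services/[<scheme>:]<svc>[:<port-or-portname>]/proxy/<path>
# examples from the attempts below:
#   pods/http:proxy-service-5djk7-6mxlx:1080/proxy/  -> plain HTTP to container port 1080
#   pods/https:proxy-service-5djk7-6mxlx:460/proxy/  -> HTTPS to container port 460
#   services/proxy-service-5djk7:portname1/proxy/    -> named service port "portname1"
# manual spot-check (illustrative):
#   kubectl get --raw /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/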
Jul  5 13:14:05.604: INFO: (0) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 3.945585ms)
Jul  5 13:14:05.605: INFO: (0) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 4.547056ms)
Jul  5 13:14:05.606: INFO: (0) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 5.667597ms)
Jul  5 13:14:05.607: INFO: (0) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 6.375139ms)
Jul  5 13:14:05.610: INFO: (0) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 9.45021ms)
Jul  5 13:14:05.610: INFO: (0) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 9.395283ms)
Jul  5 13:14:05.610: INFO: (0) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 9.490559ms)
Jul  5 13:14:05.610: INFO: (0) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 9.755527ms)
Jul  5 13:14:05.610: INFO: (0) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 9.768146ms)
Jul  5 13:14:05.610: INFO: (0) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 9.721234ms)
Jul  5 13:14:05.610: INFO: (0) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 9.812875ms)
Jul  5 13:14:05.612: INFO: (0) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 11.331435ms)
Jul  5 13:14:05.613: INFO: (0) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: test (200; 5.480122ms)
Jul  5 13:14:05.621: INFO: (1) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 5.56773ms)
Jul  5 13:14:05.621: INFO: (1) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 5.626305ms)
Jul  5 13:14:05.621: INFO: (1) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 5.648489ms)
Jul  5 13:14:05.621: INFO: (1) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 5.593617ms)
Jul  5 13:14:05.621: INFO: (1) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ... (200; 5.649338ms)
Jul  5 13:14:05.622: INFO: (1) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 6.451323ms)
Jul  5 13:14:05.622: INFO: (1) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 6.581852ms)
Jul  5 13:14:05.622: INFO: (1) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 6.737932ms)
Jul  5 13:14:05.623: INFO: (1) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 7.205307ms)
Jul  5 13:14:05.623: INFO: (1) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 7.274414ms)
Jul  5 13:14:05.623: INFO: (1) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 7.71184ms)
Jul  5 13:14:05.628: INFO: (2) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 4.466044ms)
Jul  5 13:14:05.628: INFO: (2) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.442961ms)
Jul  5 13:14:05.628: INFO: (2) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 4.532394ms)
Jul  5 13:14:05.629: INFO: (2) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 5.906665ms)
Jul  5 13:14:05.630: INFO: (2) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 6.322305ms)
Jul  5 13:14:05.630: INFO: (2) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 6.309028ms)
Jul  5 13:14:05.630: INFO: (2) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ... (200; 2.955285ms)
Jul  5 13:14:05.634: INFO: (3) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 3.137314ms)
Jul  5 13:14:05.634: INFO: (3) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 3.138097ms)
Jul  5 13:14:05.635: INFO: (3) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.052887ms)
Jul  5 13:14:05.635: INFO: (3) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.313086ms)
Jul  5 13:14:05.635: INFO: (3) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 4.306582ms)
Jul  5 13:14:05.635: INFO: (3) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 4.315948ms)
Jul  5 13:14:05.635: INFO: (3) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 4.382868ms)
Jul  5 13:14:05.635: INFO: (3) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 4.391997ms)
Jul  5 13:14:05.635: INFO: (3) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ... (200; 2.984189ms)
Jul  5 13:14:05.641: INFO: (4) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 3.428071ms)
Jul  5 13:14:05.641: INFO: (4) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: test (200; 3.677221ms)
Jul  5 13:14:05.641: INFO: (4) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 3.671736ms)
Jul  5 13:14:05.642: INFO: (4) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 3.971485ms)
Jul  5 13:14:05.642: INFO: (4) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 4.039926ms)
Jul  5 13:14:05.642: INFO: (4) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 4.40132ms)
Jul  5 13:14:05.642: INFO: (4) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 4.55248ms)
Jul  5 13:14:05.642: INFO: (4) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 4.639037ms)
Jul  5 13:14:05.642: INFO: (4) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 4.581739ms)
Jul  5 13:14:05.642: INFO: (4) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 4.684186ms)
Jul  5 13:14:05.643: INFO: (4) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 5.177921ms)
Jul  5 13:14:05.669: INFO: (5) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 26.016077ms)
Jul  5 13:14:05.670: INFO: (5) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 26.744996ms)
Jul  5 13:14:05.670: INFO: (5) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 26.762412ms)
Jul  5 13:14:05.670: INFO: (5) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 26.858174ms)
Jul  5 13:14:05.670: INFO: (5) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 26.898407ms)
Jul  5 13:14:05.670: INFO: (5) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 26.856843ms)
Jul  5 13:14:05.670: INFO: (5) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ... (200; 26.897952ms)
Jul  5 13:14:05.670: INFO: (5) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 26.943274ms)
Jul  5 13:14:05.670: INFO: (5) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 26.925177ms)
Jul  5 13:14:05.671: INFO: (5) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 28.003019ms)
Jul  5 13:14:05.671: INFO: (5) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 27.981167ms)
Jul  5 13:14:05.671: INFO: (5) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 28.045703ms)
Jul  5 13:14:05.671: INFO: (5) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 28.049163ms)
Jul  5 13:14:05.671: INFO: (5) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 28.055084ms)
Jul  5 13:14:05.671: INFO: (5) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 28.079575ms)
Jul  5 13:14:05.675: INFO: (6) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 3.800395ms)
Jul  5 13:14:05.675: INFO: (6) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 3.813584ms)
Jul  5 13:14:05.675: INFO: (6) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: test<... (200; 4.146209ms)
Jul  5 13:14:05.675: INFO: (6) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 4.163431ms)
Jul  5 13:14:05.676: INFO: (6) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 4.608167ms)
Jul  5 13:14:05.676: INFO: (6) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 4.589153ms)
Jul  5 13:14:05.676: INFO: (6) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.57707ms)
Jul  5 13:14:05.676: INFO: (6) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 4.788599ms)
Jul  5 13:14:05.677: INFO: (6) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 5.69453ms)
Jul  5 13:14:05.677: INFO: (6) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 5.929218ms)
Jul  5 13:14:05.677: INFO: (6) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 6.101627ms)
Jul  5 13:14:05.677: INFO: (6) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 6.339972ms)
Jul  5 13:14:05.677: INFO: (6) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 6.401201ms)
Jul  5 13:14:05.678: INFO: (6) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 6.510095ms)
Jul  5 13:14:05.682: INFO: (7) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.126573ms)
Jul  5 13:14:05.682: INFO: (7) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: test<... (200; 4.4285ms)
Jul  5 13:14:05.682: INFO: (7) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 4.616322ms)
Jul  5 13:14:05.682: INFO: (7) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 4.85049ms)
Jul  5 13:14:05.683: INFO: (7) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 5.113044ms)
Jul  5 13:14:05.683: INFO: (7) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 5.176393ms)
Jul  5 13:14:05.683: INFO: (7) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 5.147516ms)
Jul  5 13:14:05.683: INFO: (7) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 5.099441ms)
Jul  5 13:14:05.683: INFO: (7) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 5.313329ms)
Jul  5 13:14:05.683: INFO: (7) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 5.62436ms)
Jul  5 13:14:05.684: INFO: (7) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 5.858066ms)
Jul  5 13:14:05.684: INFO: (7) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 6.021606ms)
Jul  5 13:14:05.684: INFO: (7) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 6.061652ms)
Jul  5 13:14:05.684: INFO: (7) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 6.183739ms)
Jul  5 13:14:05.684: INFO: (7) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 6.101046ms)
Jul  5 13:14:05.693: INFO: (8) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 9.304293ms)
Jul  5 13:14:05.693: INFO: (8) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 9.437248ms)
Jul  5 13:14:05.694: INFO: (8) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 9.629176ms)
Jul  5 13:14:05.694: INFO: (8) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 9.571661ms)
Jul  5 13:14:05.694: INFO: (8) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 10.282342ms)
Jul  5 13:14:05.695: INFO: (8) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 10.786106ms)
Jul  5 13:14:05.695: INFO: (8) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 10.784484ms)
Jul  5 13:14:05.695: INFO: (8) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 10.843698ms)
Jul  5 13:14:05.695: INFO: (8) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 10.899006ms)
Jul  5 13:14:05.695: INFO: (8) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 10.870334ms)
Jul  5 13:14:05.695: INFO: (8) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ...
Jul  5 13:14:05.699: INFO: (9) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 4.029967ms)
Jul  5 13:14:05.699: INFO: (9) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ...
Jul  5 13:14:05.699: INFO: (9) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 5.628625ms)
Jul  5 13:14:05.701: INFO: (9) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 5.726559ms)
Jul  5 13:14:05.701: INFO: (9) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 5.65462ms)
Jul  5 13:14:05.701: INFO: (9) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 5.699264ms)
Jul  5 13:14:05.701: INFO: (9) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 5.793792ms)
Jul  5 13:14:05.701: INFO: (9) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 5.958011ms)
Jul  5 13:14:05.701: INFO: (9) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 5.989579ms)
Jul  5 13:14:05.701: INFO: (9) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 6.151014ms)
Jul  5 13:14:05.702: INFO: (9) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 6.246199ms)
Jul  5 13:14:05.702: INFO: (9) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 6.638616ms)
Jul  5 13:14:05.702: INFO: (9) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 6.731453ms)
Jul  5 13:14:05.702: INFO: (9) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 6.73624ms)
Jul  5 13:14:05.702: INFO: (9) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 6.896614ms)
Jul  5 13:14:05.705: INFO: (10) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ...
Jul  5 13:14:05.705: INFO: (10) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 2.866704ms)
Jul  5 13:14:05.705: INFO: (10) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 3.011414ms)
Jul  5 13:14:05.705: INFO: (10) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 3.086856ms)
Jul  5 13:14:05.706: INFO: (10) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.182787ms)
Jul  5 13:14:05.706: INFO: (10) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.197438ms)
Jul  5 13:14:05.706: INFO: (10) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 4.26657ms)
Jul  5 13:14:05.707: INFO: (10) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 4.487711ms)
Jul  5 13:14:05.707: INFO: (10) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 4.66626ms)
Jul  5 13:14:05.707: INFO: (10) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 4.867982ms)
Jul  5 13:14:05.707: INFO: (10) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 4.906806ms)
Jul  5 13:14:05.707: INFO: (10) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 4.921458ms)
Jul  5 13:14:05.707: INFO: (10) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 4.894818ms)
Jul  5 13:14:05.708: INFO: (10) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 5.854509ms)
Jul  5 13:14:05.708: INFO: (10) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 5.876725ms)
Jul  5 13:14:05.712: INFO: (11) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 3.460678ms)
Jul  5 13:14:05.712: INFO: (11) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ...
Jul  5 13:14:05.712: INFO: (11) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 4.44854ms)
Jul  5 13:14:05.713: INFO: (11) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 4.449159ms)
Jul  5 13:14:05.713: INFO: (11) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 4.446465ms)
Jul  5 13:14:05.713: INFO: (11) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 4.483702ms)
Jul  5 13:14:05.713: INFO: (11) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 4.728478ms)
Jul  5 13:14:05.714: INFO: (11) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 5.690095ms)
Jul  5 13:14:05.714: INFO: (11) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 5.701112ms)
Jul  5 13:14:05.714: INFO: (11) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 6.012839ms)
Jul  5 13:14:05.714: INFO: (11) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 6.004874ms)
Jul  5 13:14:05.714: INFO: (11) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 6.046993ms)
Jul  5 13:14:05.714: INFO: (11) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 6.000278ms)
Jul  5 13:14:05.716: INFO: (12) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 1.674941ms)
Jul  5 13:14:05.718: INFO: (12) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 4.092053ms)
Jul  5 13:14:05.719: INFO: (12) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 4.301415ms)
Jul  5 13:14:05.719: INFO: (12) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 4.270204ms)
Jul  5 13:14:05.719: INFO: (12) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.306053ms)
Jul  5 13:14:05.719: INFO: (12) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 4.333297ms)
Jul  5 13:14:05.719: INFO: (12) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.30643ms)
Jul  5 13:14:05.719: INFO: (12) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 4.412614ms)
Jul  5 13:14:05.719: INFO: (12) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 4.33267ms)
Jul  5 13:14:05.719: INFO: (12) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ...
Jul  5 13:14:05.725: INFO: (13) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 4.38835ms)
Jul  5 13:14:05.725: INFO: (13) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 4.34261ms)
Jul  5 13:14:05.725: INFO: (13) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 4.379502ms)
Jul  5 13:14:05.725: INFO: (13) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 4.451545ms)
Jul  5 13:14:05.725: INFO: (13) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 4.404088ms)
Jul  5 13:14:05.725: INFO: (13) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.484736ms)
Jul  5 13:14:05.725: INFO: (13) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 4.610863ms)
Jul  5 13:14:05.726: INFO: (13) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 5.819591ms)
Jul  5 13:14:05.726: INFO: (13) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 5.844862ms)
Jul  5 13:14:05.726: INFO: (13) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ...
Jul  5 13:14:05.732: INFO: (14) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 4.792631ms)
Jul  5 13:14:05.732: INFO: (14) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 4.77895ms)
Jul  5 13:14:05.732: INFO: (14) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.85945ms)
Jul  5 13:14:05.732: INFO: (14) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 4.768864ms)
Jul  5 13:14:05.732: INFO: (14) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ...
Jul  5 13:14:05.732: INFO: (14) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 4.93055ms)
Jul  5 13:14:05.732: INFO: (14) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 5.061654ms)
Jul  5 13:14:05.734: INFO: (14) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 6.864443ms)
Jul  5 13:14:05.734: INFO: (14) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 7.00467ms)
Jul  5 13:14:05.734: INFO: (14) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 7.122955ms)
Jul  5 13:14:05.734: INFO: (14) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 7.106497ms)
Jul  5 13:14:05.734: INFO: (14) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 7.229943ms)
Jul  5 13:14:05.735: INFO: (14) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 7.371778ms)
Jul  5 13:14:05.738: INFO: (15) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 3.833938ms)
Jul  5 13:14:05.739: INFO: (15) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 3.846899ms)
Jul  5 13:14:05.739: INFO: (15) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ...
Jul  5 13:14:05.739: INFO: (15) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 4.315458ms)
Jul  5 13:14:05.739: INFO: (15) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.349892ms)
Jul  5 13:14:05.739: INFO: (15) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 4.645721ms)
Jul  5 13:14:05.739: INFO: (15) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 4.671118ms)
Jul  5 13:14:05.739: INFO: (15) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 4.567535ms)
Jul  5 13:14:05.739: INFO: (15) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 4.891715ms)
Jul  5 13:14:05.740: INFO: (15) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 5.003493ms)
Jul  5 13:14:05.740: INFO: (15) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 5.015522ms)
Jul  5 13:14:05.741: INFO: (15) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 6.613595ms)
Jul  5 13:14:05.741: INFO: (15) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 6.59048ms)
Jul  5 13:14:05.741: INFO: (15) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 6.766604ms)
Jul  5 13:14:05.742: INFO: (15) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 7.134917ms)
Jul  5 13:14:05.746: INFO: (16) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 4.08458ms)
Jul  5 13:14:05.746: INFO: (16) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 4.078854ms)
Jul  5 13:14:05.746: INFO: (16) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: test (200; 6.060972ms)
Jul  5 13:14:05.748: INFO: (16) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 6.027338ms)
Jul  5 13:14:05.748: INFO: (16) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 6.085949ms)
Jul  5 13:14:05.748: INFO: (16) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 6.113351ms)
Jul  5 13:14:05.748: INFO: (16) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 6.386345ms)
Jul  5 13:14:05.749: INFO: (16) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 7.173156ms)
Jul  5 13:14:05.750: INFO: (16) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 7.734276ms)
Jul  5 13:14:05.750: INFO: (16) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 8.072789ms)
Jul  5 13:14:05.750: INFO: (16) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 8.025825ms)
Jul  5 13:14:05.755: INFO: (17) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 4.803455ms)
Jul  5 13:14:05.755: INFO: (17) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 4.788021ms)
Jul  5 13:14:05.755: INFO: (17) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 4.909125ms)
Jul  5 13:14:05.755: INFO: (17) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 5.195362ms)
Jul  5 13:14:05.755: INFO: (17) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 5.169524ms)
Jul  5 13:14:05.755: INFO: (17) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 5.339555ms)
Jul  5 13:14:05.755: INFO: (17) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 5.332389ms)
Jul  5 13:14:05.755: INFO: (17) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 5.337916ms)
Jul  5 13:14:05.756: INFO: (17) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ...
Jul  5 13:14:05.756: INFO: (17) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 5.766074ms)
Jul  5 13:14:05.756: INFO: (17) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 5.76416ms)
Jul  5 13:14:05.756: INFO: (17) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 5.926373ms)
Jul  5 13:14:05.756: INFO: (17) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 6.412582ms)
Jul  5 13:14:05.757: INFO: (17) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 6.480989ms)
Jul  5 13:14:05.761: INFO: (18) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 4.445117ms)
Jul  5 13:14:05.762: INFO: (18) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 5.073906ms)
Jul  5 13:14:05.762: INFO: (18) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 5.377973ms)
Jul  5 13:14:05.762: INFO: (18) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ...
Jul  5 13:14:05.762: INFO: (18) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 5.397503ms)
Jul  5 13:14:05.762: INFO: (18) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 5.575726ms)
Jul  5 13:14:05.763: INFO: (18) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 5.731367ms)
Jul  5 13:14:05.763: INFO: (18) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 5.788097ms)
Jul  5 13:14:05.763: INFO: (18) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 5.850715ms)
Jul  5 13:14:05.763: INFO: (18) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 5.853535ms)
Jul  5 13:14:05.764: INFO: (18) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 7.235739ms)
Jul  5 13:14:05.764: INFO: (18) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 7.354961ms)
Jul  5 13:14:05.765: INFO: (18) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 7.642726ms)
Jul  5 13:14:05.765: INFO: (18) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 7.87841ms)
Jul  5 13:14:05.765: INFO: (18) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 7.893939ms)
Jul  5 13:14:05.765: INFO: (18) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 8.133569ms)
Jul  5 13:14:05.775: INFO: (19) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 9.952581ms)
Jul  5 13:14:05.775: INFO: (19) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:1080/proxy/: test<... (200; 9.983865ms)
Jul  5 13:14:05.777: INFO: (19) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx/proxy/: test (200; 12.121115ms)
Jul  5 13:14:05.777: INFO: (19) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:1080/proxy/: ... (200; 12.08774ms)
Jul  5 13:14:05.777: INFO: (19) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:460/proxy/: tls baz (200; 12.28706ms)
Jul  5 13:14:05.778: INFO: (19) /api/v1/namespaces/proxy-4455/pods/proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 12.339024ms)
Jul  5 13:14:05.778: INFO: (19) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:462/proxy/: tls qux (200; 12.654303ms)
Jul  5 13:14:05.778: INFO: (19) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname2/proxy/: bar (200; 12.683716ms)
Jul  5 13:14:05.778: INFO: (19) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:162/proxy/: bar (200; 12.6223ms)
Jul  5 13:14:05.778: INFO: (19) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname2/proxy/: bar (200; 12.762413ms)
Jul  5 13:14:05.778: INFO: (19) /api/v1/namespaces/proxy-4455/pods/http:proxy-service-5djk7-6mxlx:160/proxy/: foo (200; 12.698194ms)
Jul  5 13:14:05.778: INFO: (19) /api/v1/namespaces/proxy-4455/services/proxy-service-5djk7:portname1/proxy/: foo (200; 12.905877ms)
Jul  5 13:14:05.779: INFO: (19) /api/v1/namespaces/proxy-4455/services/http:proxy-service-5djk7:portname1/proxy/: foo (200; 14.05705ms)
Jul  5 13:14:05.779: INFO: (19) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname1/proxy/: tls baz (200; 14.208445ms)
Jul  5 13:14:05.779: INFO: (19) /api/v1/namespaces/proxy-4455/services/https:proxy-service-5djk7:tlsportname2/proxy/: tls qux (200; 14.261568ms)
Jul  5 13:14:05.779: INFO: (19) /api/v1/namespaces/proxy-4455/pods/https:proxy-service-5djk7-6mxlx:443/proxy/: ...
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:14:22.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0705 13:14:23.199341       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  5 13:14:23.199: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:14:23.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5206" for this suite.
Jul  5 13:14:29.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:14:29.386: INFO: namespace gc-5206 deletion completed in 6.183618206s

• [SLOW TEST:7.287 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
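A note on the spec above: it creates a Deployment, deletes it with a non-orphaning policy, and polls until the garbage collector has removed the dependent ReplicaSet and Pods; the "expected 0 rs, got 1 rs" STEP lines record an intermediate state during that wait, and the bullet in the summary shows the spec passed. A minimal sketch of the same kind of delete, assuming a recent client-go; the namespace "default" and Deployment name "web" are illustrative, not taken from this run.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path this run uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Background propagation deletes the Deployment immediately and lets
	// the garbage collector remove the ReplicaSet and its Pods, which is
	// the "not orphaning" behaviour the spec asserts. Namespace and name
	// are illustrative.
	policy := metav1.DeletePropagationBackground
	if err := client.AppsV1().Deployments("default").Delete(
		context.TODO(), "web", metav1.DeleteOptions{PropagationPolicy: &policy},
	); err != nil {
		panic(err)
	}
}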
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:14:29.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 13:14:29.428: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24ac4d8e-7ba4-41c7-bdd6-adc186bd7c80" in namespace "projected-392" to be "success or failure"
Jul  5 13:14:29.448: INFO: Pod "downwardapi-volume-24ac4d8e-7ba4-41c7-bdd6-adc186bd7c80": Phase="Pending", Reason="", readiness=false. Elapsed: 19.9632ms
Jul  5 13:14:31.453: INFO: Pod "downwardapi-volume-24ac4d8e-7ba4-41c7-bdd6-adc186bd7c80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025076553s
Jul  5 13:14:33.457: INFO: Pod "downwardapi-volume-24ac4d8e-7ba4-41c7-bdd6-adc186bd7c80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029202325s
STEP: Saw pod success
Jul  5 13:14:33.457: INFO: Pod "downwardapi-volume-24ac4d8e-7ba4-41c7-bdd6-adc186bd7c80" satisfied condition "success or failure"
Jul  5 13:14:33.460: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-24ac4d8e-7ba4-41c7-bdd6-adc186bd7c80 container client-container: <nil>
STEP: delete the pod
Jul  5 13:14:33.475: INFO: Waiting for pod downwardapi-volume-24ac4d8e-7ba4-41c7-bdd6-adc186bd7c80 to disappear
Jul  5 13:14:33.543: INFO: Pod downwardapi-volume-24ac4d8e-7ba4-41c7-bdd6-adc186bd7c80 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:14:33.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-392" for this suite.
Jul  5 13:14:39.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:14:39.657: INFO: namespace projected-392 deletion completed in 6.110137187s

• [SLOW TEST:10.271 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
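For reference, the pod this kind of spec creates mounts a projected downwardAPI volume that surfaces the container's own CPU limit as a file, then reads the file back. A sketch using current k8s.io/api types; the names, image, and the 500m limit are illustrative, not from this run.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// A pod whose container prints the CPU limit that the projected
// downwardAPI volume wrote into /etc/podinfo/cpu_limit.
var cpuLimitPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "client-container",
			Image:   "busybox", // illustrative image
			Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
			Resources: corev1.ResourceRequirements{
				Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
			},
			VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
		}},
		Volumes: []corev1.Volume{{
			Name: "podinfo",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{{
								Path: "cpu_limit",
								ResourceFieldRef: &corev1.ResourceFieldSelector{
									ContainerName: "client-container",
									Resource:      "limits.cpu",
								},
							}},
						},
					}},
				},
			},
		}},
	},
}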
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:14:39.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-rcng
STEP: Creating a pod to test atomic-volume-subpath
Jul  5 13:14:39.767: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rcng" in namespace "subpath-9548" to be "success or failure"
Jul  5 13:14:39.787: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Pending", Reason="", readiness=false. Elapsed: 19.437187ms
Jul  5 13:14:41.896: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129037171s
Jul  5 13:14:43.900: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Running", Reason="", readiness=true. Elapsed: 4.133144475s
Jul  5 13:14:45.904: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Running", Reason="", readiness=true. Elapsed: 6.136683891s
Jul  5 13:14:47.907: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Running", Reason="", readiness=true. Elapsed: 8.140210957s
Jul  5 13:14:49.912: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Running", Reason="", readiness=true. Elapsed: 10.144557721s
Jul  5 13:14:51.916: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Running", Reason="", readiness=true. Elapsed: 12.148634375s
Jul  5 13:14:53.921: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Running", Reason="", readiness=true. Elapsed: 14.153394339s
Jul  5 13:14:55.929: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Running", Reason="", readiness=true. Elapsed: 16.161411491s
Jul  5 13:14:57.933: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Running", Reason="", readiness=true. Elapsed: 18.165633884s
Jul  5 13:14:59.938: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Running", Reason="", readiness=true. Elapsed: 20.170607761s
Jul  5 13:15:01.942: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Running", Reason="", readiness=true. Elapsed: 22.174671324s
Jul  5 13:15:03.947: INFO: Pod "pod-subpath-test-configmap-rcng": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.17978661s
STEP: Saw pod success
Jul  5 13:15:03.947: INFO: Pod "pod-subpath-test-configmap-rcng" satisfied condition "success or failure"
Jul  5 13:15:03.951: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-rcng container test-container-subpath-configmap-rcng: <nil>
STEP: delete the pod
Jul  5 13:15:03.971: INFO: Waiting for pod pod-subpath-test-configmap-rcng to disappear
Jul  5 13:15:03.975: INFO: Pod pod-subpath-test-configmap-rcng no longer exists
STEP: Deleting pod pod-subpath-test-configmap-rcng
Jul  5 13:15:03.976: INFO: Deleting pod "pod-subpath-test-configmap-rcng" in namespace "subpath-9548"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:15:03.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9548" for this suite.
Jul  5 13:15:09.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:15:10.074: INFO: namespace subpath-9548 deletion completed in 6.093281085s

• [SLOW TEST:30.417 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
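The subpath spec above mounts a single ConfigMap key at a file path via the volumeMount's subPath field and keeps the pod running long enough (the roughly 24 seconds of Running polls) to observe reads through the atomically written volume. A sketch of the spec shape, with illustrative names:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// SubPath mounts one entry of the volume at the mount point instead of
// the whole directory; configMap volumes are written atomically, which
// is the interaction this test exercises.
var subpathPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "test-container-subpath-configmap",
			Image:   "busybox", // illustrative image
			Command: []string{"cat", "/test-volume/test-file"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "config",
				MountPath: "/test-volume/test-file",
				SubPath:   "test-file",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "config",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
				},
			},
		}},
	},
}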
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:15:10.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:15:16.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-1341" for this suite.
Jul  5 13:15:22.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:15:22.628: INFO: namespace namespaces-1341 deletion completed in 6.102039821s
STEP: Destroying namespace "nsdeletetest-9345" for this suite.
Jul  5 13:15:22.630: INFO: Namespace nsdeletetest-9345 was already deleted
STEP: Destroying namespace "nsdeletetest-5126" for this suite.
Jul  5 13:15:28.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:15:28.759: INFO: namespace nsdeletetest-5126 deletion completed in 6.128966514s

• [SLOW TEST:18.685 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
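The namespace spec deletes a namespace that contains a Service, waits for the deletion to finish, recreates a namespace, and verifies no Service survived. The final check amounts to listing Services and expecting zero items; a minimal sketch assuming a recent client-go, with an illustrative namespace name:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// After delete-and-recreate, nothing from the old namespace
	// incarnation should be visible.
	svcs, err := client.CoreV1().Services("nsdeletetest").List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("services remaining: %d (want 0)\n", len(svcs.Items))
}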
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:15:28.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 13:15:28.846: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f13f54be-926d-431a-af64-2c1c05f11faa" in namespace "projected-4422" to be "success or failure"
Jul  5 13:15:28.850: INFO: Pod "downwardapi-volume-f13f54be-926d-431a-af64-2c1c05f11faa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.411268ms
Jul  5 13:15:31.383: INFO: Pod "downwardapi-volume-f13f54be-926d-431a-af64-2c1c05f11faa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.537058807s
Jul  5 13:15:33.387: INFO: Pod "downwardapi-volume-f13f54be-926d-431a-af64-2c1c05f11faa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.541704917s
STEP: Saw pod success
Jul  5 13:15:33.388: INFO: Pod "downwardapi-volume-f13f54be-926d-431a-af64-2c1c05f11faa" satisfied condition "success or failure"
Jul  5 13:15:33.391: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f13f54be-926d-431a-af64-2c1c05f11faa container client-container: <nil>
STEP: delete the pod
Jul  5 13:15:33.434: INFO: Waiting for pod downwardapi-volume-f13f54be-926d-431a-af64-2c1c05f11faa to disappear
Jul  5 13:15:33.438: INFO: Pod downwardapi-volume-f13f54be-926d-431a-af64-2c1c05f11faa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:15:33.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4422" for this suite.
Jul  5 13:15:39.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:15:39.552: INFO: namespace projected-4422 deletion completed in 6.110775417s

• [SLOW TEST:10.792 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
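This spec is the memory-request counterpart of the CPU-limit test earlier; only the projected volume item changes. A sketch of just that item, where the Divisor value (which picks the unit the number is reported in) is an assumption:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// File item exposing the container's memory request; with Divisor "1"
// the value is written in plain bytes ("1Mi" would report mebibytes).
var memoryRequestItem = corev1.DownwardAPIVolumeFile{
	Path: "memory_request",
	ResourceFieldRef: &corev1.ResourceFieldSelector{
		ContainerName: "client-container", // illustrative name
		Resource:      "requests.memory",
		Divisor:       resource.MustParse("1"),
	},
}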
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:15:39.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7882
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  5 13:15:39.590: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  5 13:16:03.787: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.80:8080/dial?request=hostName&protocol=udp&host=10.244.2.124&port=8081&tries=1'] Namespace:pod-network-test-7882 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:16:03.787: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:16:03.823982       6 log.go:172] (0xc0000edef0) (0xc002890640) Create stream
I0705 13:16:03.824016       6 log.go:172] (0xc0000edef0) (0xc002890640) Stream added, broadcasting: 1
I0705 13:16:03.830487       6 log.go:172] (0xc0000edef0) Reply frame received for 1
I0705 13:16:03.830539       6 log.go:172] (0xc0000edef0) (0xc0028906e0) Create stream
I0705 13:16:03.830557       6 log.go:172] (0xc0000edef0) (0xc0028906e0) Stream added, broadcasting: 3
I0705 13:16:03.834069       6 log.go:172] (0xc0000edef0) Reply frame received for 3
I0705 13:16:03.834122       6 log.go:172] (0xc0000edef0) (0xc002a82fa0) Create stream
I0705 13:16:03.834143       6 log.go:172] (0xc0000edef0) (0xc002a82fa0) Stream added, broadcasting: 5
I0705 13:16:03.834991       6 log.go:172] (0xc0000edef0) Reply frame received for 5
I0705 13:16:03.930284       6 log.go:172] (0xc0000edef0) Data frame received for 3
I0705 13:16:03.930310       6 log.go:172] (0xc0028906e0) (3) Data frame handling
I0705 13:16:03.930325       6 log.go:172] (0xc0028906e0) (3) Data frame sent
I0705 13:16:03.930854       6 log.go:172] (0xc0000edef0) Data frame received for 5
I0705 13:16:03.930877       6 log.go:172] (0xc002a82fa0) (5) Data frame handling
I0705 13:16:03.931171       6 log.go:172] (0xc0000edef0) Data frame received for 3
I0705 13:16:03.931211       6 log.go:172] (0xc0028906e0) (3) Data frame handling
I0705 13:16:03.932857       6 log.go:172] (0xc0000edef0) Data frame received for 1
I0705 13:16:03.932882       6 log.go:172] (0xc002890640) (1) Data frame handling
I0705 13:16:03.932897       6 log.go:172] (0xc002890640) (1) Data frame sent
I0705 13:16:03.932912       6 log.go:172] (0xc0000edef0) (0xc002890640) Stream removed, broadcasting: 1
I0705 13:16:03.932980       6 log.go:172] (0xc0000edef0) (0xc002890640) Stream removed, broadcasting: 1
I0705 13:16:03.932996       6 log.go:172] (0xc0000edef0) (0xc0028906e0) Stream removed, broadcasting: 3
I0705 13:16:03.933019       6 log.go:172] (0xc0000edef0) (0xc002a82fa0) Stream removed, broadcasting: 5
I0705 13:16:03.933097       6 log.go:172] (0xc0000edef0) Go away received
Jul  5 13:16:03.933: INFO: Waiting for endpoints: map[]
Jul  5 13:16:03.936: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.80:8080/dial?request=hostName&protocol=udp&host=10.244.1.79&port=8081&tries=1'] Namespace:pod-network-test-7882 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:16:03.936: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:16:03.973709       6 log.go:172] (0xc000fd0bb0) (0xc0030203c0) Create stream
I0705 13:16:03.973741       6 log.go:172] (0xc000fd0bb0) (0xc0030203c0) Stream added, broadcasting: 1
I0705 13:16:03.976051       6 log.go:172] (0xc000fd0bb0) Reply frame received for 1
I0705 13:16:03.976091       6 log.go:172] (0xc000fd0bb0) (0xc002890780) Create stream
I0705 13:16:03.976102       6 log.go:172] (0xc000fd0bb0) (0xc002890780) Stream added, broadcasting: 3
I0705 13:16:03.977097       6 log.go:172] (0xc000fd0bb0) Reply frame received for 3
I0705 13:16:03.977304       6 log.go:172] (0xc000fd0bb0) (0xc003020460) Create stream
I0705 13:16:03.977328       6 log.go:172] (0xc000fd0bb0) (0xc003020460) Stream added, broadcasting: 5
I0705 13:16:03.978637       6 log.go:172] (0xc000fd0bb0) Reply frame received for 5
I0705 13:16:04.049673       6 log.go:172] (0xc000fd0bb0) Data frame received for 3
I0705 13:16:04.049705       6 log.go:172] (0xc002890780) (3) Data frame handling
I0705 13:16:04.049720       6 log.go:172] (0xc002890780) (3) Data frame sent
I0705 13:16:04.050190       6 log.go:172] (0xc000fd0bb0) Data frame received for 3
I0705 13:16:04.050208       6 log.go:172] (0xc002890780) (3) Data frame handling
I0705 13:16:04.050429       6 log.go:172] (0xc000fd0bb0) Data frame received for 5
I0705 13:16:04.050446       6 log.go:172] (0xc003020460) (5) Data frame handling
I0705 13:16:04.051937       6 log.go:172] (0xc000fd0bb0) Data frame received for 1
I0705 13:16:04.051971       6 log.go:172] (0xc0030203c0) (1) Data frame handling
I0705 13:16:04.052003       6 log.go:172] (0xc0030203c0) (1) Data frame sent
I0705 13:16:04.052023       6 log.go:172] (0xc000fd0bb0) (0xc0030203c0) Stream removed, broadcasting: 1
I0705 13:16:04.052113       6 log.go:172] (0xc000fd0bb0) Go away received
I0705 13:16:04.052291       6 log.go:172] (0xc000fd0bb0) (0xc0030203c0) Stream removed, broadcasting: 1
I0705 13:16:04.052346       6 log.go:172] (0xc000fd0bb0) (0xc002890780) Stream removed, broadcasting: 3
I0705 13:16:04.052369       6 log.go:172] (0xc000fd0bb0) (0xc003020460) Stream removed, broadcasting: 5
Jul  5 13:16:04.052: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:16:04.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7882" for this suite.
Jul  5 13:16:26.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:16:26.156: INFO: namespace pod-network-test-7882 deletion completed in 22.098516937s

• [SLOW TEST:46.603 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
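In the exec output above, the framework curls one pod's webserver and asks it to dial a second pod over UDP; "Waiting for endpoints: map[]" means every expected hostname has already been reported, so nothing is left to wait for. The same probe in plain Go, reusing the (ephemeral) pod IPs from this run; the JSON response shape is an assumption about the e2e netserver:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Ask the webserver pod at 10.244.1.80 to send a UDP request to
	// 10.244.2.124:8081 and report the hostname that answered.
	url := "http://10.244.1.80:8080/dial?request=hostName&protocol=udp&host=10.244.2.124&port=8081&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out struct {
		Responses []string `json:"responses"` // assumed field name
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Responses) // expected: the target pod's hostname
}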
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:16:26.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:16:26.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7097" for this suite.
Jul  5 13:16:32.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:16:32.520: INFO: namespace kubelet-test-7097 deletion completed in 6.108795398s

• [SLOW TEST:6.363 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
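There is little output above because the spec only creates a pod whose container always exits non-zero and then checks that the pod can still be deleted cleanly. A sketch of such a pod, with illustrative names:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The container command fails on every start, so the pod sits in a
// crash loop; deletion must still work exactly as for a healthy pod.
var failingPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "bin-false"},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:    "bin-false",
			Image:   "busybox", // illustrative image
			Command: []string{"/bin/false"},
		}},
	},
}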
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:16:32.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-da0c6822-5b68-4671-9c93-e5a8a3cb912d
Jul  5 13:16:32.618: INFO: Pod name my-hostname-basic-da0c6822-5b68-4671-9c93-e5a8a3cb912d: Found 0 pods out of 1
Jul  5 13:16:37.623: INFO: Pod name my-hostname-basic-da0c6822-5b68-4671-9c93-e5a8a3cb912d: Found 1 pods out of 1
Jul  5 13:16:37.623: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-da0c6822-5b68-4671-9c93-e5a8a3cb912d" are running
Jul  5 13:16:37.626: INFO: Pod "my-hostname-basic-da0c6822-5b68-4671-9c93-e5a8a3cb912d-5lt7x" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 13:16:32 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 13:16:35 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 13:16:35 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 13:16:32 +0000 UTC Reason: Message:}])
Jul  5 13:16:37.626: INFO: Trying to dial the pod
Jul  5 13:16:42.639: INFO: Controller my-hostname-basic-da0c6822-5b68-4671-9c93-e5a8a3cb912d: Got expected result from replica 1 [my-hostname-basic-da0c6822-5b68-4671-9c93-e5a8a3cb912d-5lt7x]: "my-hostname-basic-da0c6822-5b68-4671-9c93-e5a8a3cb912d-5lt7x", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:16:42.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2268" for this suite.
Jul  5 13:16:48.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:16:48.742: INFO: namespace replication-controller-2268 deletion completed in 6.098192317s

• [SLOW TEST:16.222 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
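The ReplicationController above runs pods that serve their own hostname over HTTP; the "Trying to dial the pod" step fetches each replica and compares the reply to the pod name ("1 of 1 required successes"). A sketch of such a controller; the image and port follow the long-standing serve-hostname convention and are assumptions here:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var one int32 = 1

// One replica whose container responds to HTTP with its own hostname,
// so the reply can be matched against the generated pod name.
var rc = &corev1.ReplicationController{
	ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
	Spec: corev1.ReplicationControllerSpec{
		Replicas: &one,
		Selector: map[string]string{"name": "my-hostname-basic"},
		Template: &corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{
				Labels: map[string]string{"name": "my-hostname-basic"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "my-hostname-basic",
					Image: "k8s.gcr.io/serve_hostname:1.1",               // assumed image
					Ports: []corev1.ContainerPort{{ContainerPort: 9376}}, // assumed port
				}},
			},
		},
	},
}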
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:16:48.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  5 13:16:52.867: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:16:52.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2579" for this suite.
Jul  5 13:16:58.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:16:59.014: INFO: namespace container-runtime-2579 deletion completed in 6.095408194s

• [SLOW TEST:10.272 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
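In the spec above the container exits successfully after writing "OK" to its termination-message file, and the kubelet surfaces that string ("Expected: &{OK} to match Container's Termination Message: OK"). With FallbackToLogsOnError the file wins whenever it is non-empty; the log tail is used only when the file is empty and the container failed. A sketch of the relevant container fields, with an illustrative image and command:

package sketch

import corev1 "k8s.io/api/core/v1"

// Writes the termination message to the default path; the policy below
// controls what the kubelet reports when that file is empty on error.
var termContainer = corev1.Container{
	Name:    "termination-message-container",
	Image:   "busybox", // illustrative image
	Command: []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
	TerminationMessagePath:   "/dev/termination-log",
	TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
}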
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:16:59.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-8e05054b-1c02-4708-9c6c-179875be3316
STEP: Creating a pod to test consume configMaps
Jul  5 13:16:59.112: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7d0d2ee-3014-46b0-a17d-e9302195caff" in namespace "configmap-3645" to be "success or failure"
Jul  5 13:16:59.116: INFO: Pod "pod-configmaps-a7d0d2ee-3014-46b0-a17d-e9302195caff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.895941ms
Jul  5 13:17:01.120: INFO: Pod "pod-configmaps-a7d0d2ee-3014-46b0-a17d-e9302195caff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007981223s
Jul  5 13:17:03.124: INFO: Pod "pod-configmaps-a7d0d2ee-3014-46b0-a17d-e9302195caff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011696795s
STEP: Saw pod success
Jul  5 13:17:03.124: INFO: Pod "pod-configmaps-a7d0d2ee-3014-46b0-a17d-e9302195caff" satisfied condition "success or failure"
Jul  5 13:17:03.126: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a7d0d2ee-3014-46b0-a17d-e9302195caff container configmap-volume-test: <nil>
STEP: delete the pod
Jul  5 13:17:03.141: INFO: Waiting for pod pod-configmaps-a7d0d2ee-3014-46b0-a17d-e9302195caff to disappear
Jul  5 13:17:03.152: INFO: Pod pod-configmaps-a7d0d2ee-3014-46b0-a17d-e9302195caff no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:17:03.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3645" for this suite.
Jul  5 13:17:09.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:17:09.302: INFO: namespace configmap-3645 deletion completed in 6.146320495s

• [SLOW TEST:10.288 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
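The ConfigMap spec creates a ConfigMap, mounts it into a pod as a volume, and has the container print a key's value back so the framework can verify it from the pod logs. A sketch with illustrative names and data paths:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Each key of the ConfigMap appears as a file under the mount path;
// the container just cats one of them.
var configMapPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "configmap-volume-test",
			Image:   "busybox", // illustrative image
			Command: []string{"cat", "/etc/configmap-volume/data-1"},
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "configmap-volume",
				MountPath: "/etc/configmap-volume",
			}},
		}},
		Volumes: []corev1.Volume{{
			Name: "configmap-volume",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
				},
			},
		}},
	},
}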
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:17:09.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jul  5 13:17:09.334: INFO: Waiting up to 5m0s for pod "downward-api-72e0828f-1024-4896-b892-fe31250594c6" in namespace "downward-api-746" to be "success or failure"
Jul  5 13:17:09.394: INFO: Pod "downward-api-72e0828f-1024-4896-b892-fe31250594c6": Phase="Pending", Reason="", readiness=false. Elapsed: 60.054745ms
Jul  5 13:17:11.399: INFO: Pod "downward-api-72e0828f-1024-4896-b892-fe31250594c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064889057s
Jul  5 13:17:13.403: INFO: Pod "downward-api-72e0828f-1024-4896-b892-fe31250594c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069519298s
STEP: Saw pod success
Jul  5 13:17:13.403: INFO: Pod "downward-api-72e0828f-1024-4896-b892-fe31250594c6" satisfied condition "success or failure"
Jul  5 13:17:13.406: INFO: Trying to get logs from node iruya-worker2 pod downward-api-72e0828f-1024-4896-b892-fe31250594c6 container dapi-container: <nil>
STEP: delete the pod
Jul  5 13:17:13.442: INFO: Waiting for pod downward-api-72e0828f-1024-4896-b892-fe31250594c6 to disappear
Jul  5 13:17:13.447: INFO: Pod downward-api-72e0828f-1024-4896-b892-fe31250594c6 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:17:13.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-746" for this suite.
Jul  5 13:17:19.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:17:19.565: INFO: namespace downward-api-746 deletion completed in 6.112878534s

• [SLOW TEST:10.262 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
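Here the test pod declares no resource limits of its own, so limits.cpu and limits.memory requested through the downward API fall back to the node's allocatable capacity, which is what the spec asserts. A sketch of the env-var wiring, with illustrative variable names:

package sketch

import corev1 "k8s.io/api/core/v1"

// With no limits set on the container, these resolve to the node's
// allocatable CPU and memory rather than to container-level values.
var defaultLimitEnv = []corev1.EnvVar{
	{
		Name: "CPU_LIMIT",
		ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
		},
	},
	{
		Name: "MEMORY_LIMIT",
		ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
		},
	},
}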
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:17:19.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul  5 13:17:19.670: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:19.679: INFO: Number of nodes with available pods: 0
Jul  5 13:17:19.679: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:20.685: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:20.688: INFO: Number of nodes with available pods: 0
Jul  5 13:17:20.688: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:21.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:21.744: INFO: Number of nodes with available pods: 0
Jul  5 13:17:21.744: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:22.684: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:22.688: INFO: Number of nodes with available pods: 0
Jul  5 13:17:22.688: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:23.720: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:23.724: INFO: Number of nodes with available pods: 0
Jul  5 13:17:23.724: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:24.684: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:24.688: INFO: Number of nodes with available pods: 2
Jul  5 13:17:24.688: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jul  5 13:17:24.716: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:24.748: INFO: Number of nodes with available pods: 1
Jul  5 13:17:24.748: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:25.754: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:25.757: INFO: Number of nodes with available pods: 1
Jul  5 13:17:25.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:26.752: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:26.755: INFO: Number of nodes with available pods: 1
Jul  5 13:17:26.755: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:27.755: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:27.761: INFO: Number of nodes with available pods: 1
Jul  5 13:17:27.762: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:28.754: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:28.757: INFO: Number of nodes with available pods: 1
Jul  5 13:17:28.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:29.753: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:29.756: INFO: Number of nodes with available pods: 1
Jul  5 13:17:29.756: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:30.752: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:30.755: INFO: Number of nodes with available pods: 1
Jul  5 13:17:30.755: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:31.752: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:31.755: INFO: Number of nodes with available pods: 1
Jul  5 13:17:31.755: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:32.753: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:32.757: INFO: Number of nodes with available pods: 1
Jul  5 13:17:32.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:33.752: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:33.754: INFO: Number of nodes with available pods: 1
Jul  5 13:17:33.754: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:34.760: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:34.762: INFO: Number of nodes with available pods: 1
Jul  5 13:17:34.762: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:35.753: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:35.757: INFO: Number of nodes with available pods: 1
Jul  5 13:17:35.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:36.753: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:36.755: INFO: Number of nodes with available pods: 1
Jul  5 13:17:36.755: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:37.752: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:37.755: INFO: Number of nodes with available pods: 1
Jul  5 13:17:37.755: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:38.754: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:38.757: INFO: Number of nodes with available pods: 1
Jul  5 13:17:38.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:17:39.753: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:17:39.756: INFO: Number of nodes with available pods: 2
Jul  5 13:17:39.756: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3725, will wait for the garbage collector to delete the pods
Jul  5 13:17:39.820: INFO: Deleting DaemonSet.extensions daemon-set took: 6.611635ms
Jul  5 13:17:40.120: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.246655ms
Jul  5 13:17:46.024: INFO: Number of nodes with available pods: 0
Jul  5 13:17:46.024: INFO: Number of running nodes: 0, number of available pods: 0
Jul  5 13:17:46.029: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3725/daemonsets","resourceVersion":"232765"},"items":null}

Jul  5 13:17:46.032: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3725/pods","resourceVersion":"232765"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:17:46.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3725" for this suite.
Jul  5 13:17:52.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:17:52.143: INFO: namespace daemonsets-3725 deletion completed in 6.099246596s

• [SLOW TEST:32.578 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
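The DaemonSet test above creates one pod per schedulable node and then deletes a pod to verify the controller revives it; the skipped control-plane node in the log follows from the pod template carrying no toleration for the master taint. A minimal sketch of such a DaemonSet (image and labels are illustrative):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          # No toleration for node-role.kubernetes.io/master:NoSchedule,
          # so the tainted control-plane node is skipped, as the log records.
          containers:
          - name: app
            image: nginx:1.14-alpine   # illustrative image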
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:17:52.144: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8363572c-b9f0-435e-955f-5bdbd063b192
STEP: Creating a pod to test consume secrets
Jul  5 13:17:52.234: INFO: Waiting up to 5m0s for pod "pod-secrets-4e16f473-dd0d-44ce-a807-f067671d4c02" in namespace "secrets-5558" to be "success or failure"
Jul  5 13:17:52.263: INFO: Pod "pod-secrets-4e16f473-dd0d-44ce-a807-f067671d4c02": Phase="Pending", Reason="", readiness=false. Elapsed: 28.812617ms
Jul  5 13:17:54.340: INFO: Pod "pod-secrets-4e16f473-dd0d-44ce-a807-f067671d4c02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106306871s
Jul  5 13:17:56.344: INFO: Pod "pod-secrets-4e16f473-dd0d-44ce-a807-f067671d4c02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110370236s
STEP: Saw pod success
Jul  5 13:17:56.344: INFO: Pod "pod-secrets-4e16f473-dd0d-44ce-a807-f067671d4c02" satisfied condition "success or failure"
Jul  5 13:17:56.348: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-4e16f473-dd0d-44ce-a807-f067671d4c02 container secret-env-test: 
STEP: delete the pod
Jul  5 13:17:56.384: INFO: Waiting for pod pod-secrets-4e16f473-dd0d-44ce-a807-f067671d4c02 to disappear
Jul  5 13:17:56.424: INFO: Pod pod-secrets-4e16f473-dd0d-44ce-a807-f067671d4c02 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:17:56.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5558" for this suite.
Jul  5 13:18:02.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:18:02.522: INFO: namespace secrets-5558 deletion completed in 6.09301696s

• [SLOW TEST:10.378 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
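The Secrets test above injects a secret value into a container's environment and asserts the container observes it. A minimal sketch, with illustrative names and data:

    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-test          # illustrative name
    stringData:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets
    spec:
      restartPolicy: Never
      containers:
      - name: secret-env-test
        image: busybox
        command: ["sh", "-c", "env"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: secret-test
              key: data-1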
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:18:02.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  5 13:18:06.736: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:18:06.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1421" for this suite.
Jul  5 13:18:12.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:18:12.963: INFO: namespace container-runtime-1421 deletion completed in 6.100276138s

• [SLOW TEST:10.441 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
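The container-runtime test above passes with an empty termination message because FallbackToLogsOnError only substitutes the tail of the container log when the container fails; a container that exits 0 without writing /dev/termination-log reports an empty message. A sketch of such a pod (names are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-message-test
    spec:
      restartPolicy: Never
      containers:
      - name: term-msg
        image: busybox
        # Exits successfully and writes neither logs nor
        # /dev/termination-log, so the reported message stays empty.
        command: ["sh", "-c", "exit 0"]
        terminationMessagePolicy: FallbackToLogsOnError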
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:18:12.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  5 13:18:13.036: INFO: Waiting up to 5m0s for pod "pod-0c44b78e-2370-4bf4-a3a8-3068f8d273b7" in namespace "emptydir-4138" to be "success or failure"
Jul  5 13:18:13.107: INFO: Pod "pod-0c44b78e-2370-4bf4-a3a8-3068f8d273b7": Phase="Pending", Reason="", readiness=false. Elapsed: 70.820418ms
Jul  5 13:18:15.111: INFO: Pod "pod-0c44b78e-2370-4bf4-a3a8-3068f8d273b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074166491s
Jul  5 13:18:17.115: INFO: Pod "pod-0c44b78e-2370-4bf4-a3a8-3068f8d273b7": Phase="Running", Reason="", readiness=true. Elapsed: 4.078670686s
Jul  5 13:18:19.119: INFO: Pod "pod-0c44b78e-2370-4bf4-a3a8-3068f8d273b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082872817s
STEP: Saw pod success
Jul  5 13:18:19.119: INFO: Pod "pod-0c44b78e-2370-4bf4-a3a8-3068f8d273b7" satisfied condition "success or failure"
Jul  5 13:18:19.130: INFO: Trying to get logs from node iruya-worker pod pod-0c44b78e-2370-4bf4-a3a8-3068f8d273b7 container test-container: 
STEP: delete the pod
Jul  5 13:18:19.149: INFO: Waiting for pod pod-0c44b78e-2370-4bf4-a3a8-3068f8d273b7 to disappear
Jul  5 13:18:19.153: INFO: Pod pod-0c44b78e-2370-4bf4-a3a8-3068f8d273b7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:18:19.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4138" for this suite.
Jul  5 13:18:25.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:18:25.265: INFO: namespace emptydir-4138 deletion completed in 6.107781117s

• [SLOW TEST:12.301 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
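The EmptyDir test above writes a file with mode 0666 as root into an emptyDir volume on the default medium (node disk rather than tmpfs) and verifies the resulting permissions. A rough equivalent (image and command are illustrative; the suite uses its own mount-test image):

    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-test
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir: {}   # "default" medium, i.e. backed by node disk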
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:18:25.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 13:18:25.541: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e977d91-17dc-457e-8847-7f5d08832129" in namespace "downward-api-8953" to be "success or failure"
Jul  5 13:18:25.549: INFO: Pod "downwardapi-volume-3e977d91-17dc-457e-8847-7f5d08832129": Phase="Pending", Reason="", readiness=false. Elapsed: 7.425428ms
Jul  5 13:18:27.552: INFO: Pod "downwardapi-volume-3e977d91-17dc-457e-8847-7f5d08832129": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010846989s
Jul  5 13:18:29.558: INFO: Pod "downwardapi-volume-3e977d91-17dc-457e-8847-7f5d08832129": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016786067s
STEP: Saw pod success
Jul  5 13:18:29.558: INFO: Pod "downwardapi-volume-3e977d91-17dc-457e-8847-7f5d08832129" satisfied condition "success or failure"
Jul  5 13:18:29.562: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3e977d91-17dc-457e-8847-7f5d08832129 container client-container: 
STEP: delete the pod
Jul  5 13:18:29.579: INFO: Waiting for pod downwardapi-volume-3e977d91-17dc-457e-8847-7f5d08832129 to disappear
Jul  5 13:18:29.584: INFO: Pod downwardapi-volume-3e977d91-17dc-457e-8847-7f5d08832129 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:18:29.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8953" for this suite.
Jul  5 13:18:35.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:18:35.709: INFO: namespace downward-api-8953 deletion completed in 6.12148916s

• [SLOW TEST:10.444 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
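The Downward API volume test above is the file-based counterpart of the earlier env-var test: with no memory limit declared, the projected limits.memory file carries the node's allocatable memory. A minimal sketch (names are illustrative; note that containerName is required for a volume-level resourceFieldRef):

    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-test
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory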
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:18:35.709: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-925642b4-22cc-4a19-b857-4580a300021c
STEP: Creating a pod to test consume secrets
Jul  5 13:18:35.826: INFO: Waiting up to 5m0s for pod "pod-secrets-f140eece-6475-467f-b832-f5501978d201" in namespace "secrets-7464" to be "success or failure"
Jul  5 13:18:35.836: INFO: Pod "pod-secrets-f140eece-6475-467f-b832-f5501978d201": Phase="Pending", Reason="", readiness=false. Elapsed: 10.246063ms
Jul  5 13:18:37.840: INFO: Pod "pod-secrets-f140eece-6475-467f-b832-f5501978d201": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013971705s
Jul  5 13:18:39.855: INFO: Pod "pod-secrets-f140eece-6475-467f-b832-f5501978d201": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029288383s
STEP: Saw pod success
Jul  5 13:18:39.855: INFO: Pod "pod-secrets-f140eece-6475-467f-b832-f5501978d201" satisfied condition "success or failure"
Jul  5 13:18:39.858: INFO: Trying to get logs from node iruya-worker pod pod-secrets-f140eece-6475-467f-b832-f5501978d201 container secret-volume-test: 
STEP: delete the pod
Jul  5 13:18:39.887: INFO: Waiting for pod pod-secrets-f140eece-6475-467f-b832-f5501978d201 to disappear
Jul  5 13:18:39.902: INFO: Pod pod-secrets-f140eece-6475-467f-b832-f5501978d201 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:18:39.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7464" for this suite.
Jul  5 13:18:45.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:18:45.987: INFO: namespace secrets-7464 deletion completed in 6.081794674s

• [SLOW TEST:10.278 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
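The Secrets volume test above checks that defaultMode and fsGroup cooperate so a non-root container can read the mounted secret files. A sketch with illustrative UID/GID and mode values (the suite's exact numbers may differ):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-nonroot
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000   # illustrative non-root UID
        fsGroup: 2000     # group ownership applied to the volume files
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/secret-volume"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test   # illustrative; must exist in the namespace
          defaultMode: 0440         # group-readable, so the fsGroup user can read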
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:18:45.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5be4389a-faed-4f95-aa3a-1399b51c2c0c
STEP: Creating a pod to test consume secrets
Jul  5 13:18:46.073: INFO: Waiting up to 5m0s for pod "pod-secrets-622e93d8-aeef-4999-bda0-7dd77a657aa4" in namespace "secrets-7740" to be "success or failure"
Jul  5 13:18:46.091: INFO: Pod "pod-secrets-622e93d8-aeef-4999-bda0-7dd77a657aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 18.791387ms
Jul  5 13:18:48.096: INFO: Pod "pod-secrets-622e93d8-aeef-4999-bda0-7dd77a657aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022953557s
Jul  5 13:18:50.100: INFO: Pod "pod-secrets-622e93d8-aeef-4999-bda0-7dd77a657aa4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027537751s
STEP: Saw pod success
Jul  5 13:18:50.100: INFO: Pod "pod-secrets-622e93d8-aeef-4999-bda0-7dd77a657aa4" satisfied condition "success or failure"
Jul  5 13:18:50.104: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-622e93d8-aeef-4999-bda0-7dd77a657aa4 container secret-volume-test: 
STEP: delete the pod
Jul  5 13:18:50.127: INFO: Waiting for pod pod-secrets-622e93d8-aeef-4999-bda0-7dd77a657aa4 to disappear
Jul  5 13:18:50.130: INFO: Pod pod-secrets-622e93d8-aeef-4999-bda0-7dd77a657aa4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:18:50.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7740" for this suite.
Jul  5 13:18:56.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:18:56.227: INFO: namespace secrets-7740 deletion completed in 6.092977809s

• [SLOW TEST:10.240 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:18:56.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-0b95aa0f-569a-4c8f-94dd-a1d4f74021fb
STEP: Creating a pod to test consume secrets
Jul  5 13:18:56.305: INFO: Waiting up to 5m0s for pod "pod-secrets-59d8a754-a6d8-426d-ba42-2f6536eac8b7" in namespace "secrets-1235" to be "success or failure"
Jul  5 13:18:56.310: INFO: Pod "pod-secrets-59d8a754-a6d8-426d-ba42-2f6536eac8b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203017ms
Jul  5 13:18:58.313: INFO: Pod "pod-secrets-59d8a754-a6d8-426d-ba42-2f6536eac8b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007896622s
Jul  5 13:19:00.317: INFO: Pod "pod-secrets-59d8a754-a6d8-426d-ba42-2f6536eac8b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012095301s
STEP: Saw pod success
Jul  5 13:19:00.317: INFO: Pod "pod-secrets-59d8a754-a6d8-426d-ba42-2f6536eac8b7" satisfied condition "success or failure"
Jul  5 13:19:00.321: INFO: Trying to get logs from node iruya-worker pod pod-secrets-59d8a754-a6d8-426d-ba42-2f6536eac8b7 container secret-volume-test: 
STEP: delete the pod
Jul  5 13:19:00.341: INFO: Waiting for pod pod-secrets-59d8a754-a6d8-426d-ba42-2f6536eac8b7 to disappear
Jul  5 13:19:00.358: INFO: Pod pod-secrets-59d8a754-a6d8-426d-ba42-2f6536eac8b7 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:19:00.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1235" for this suite.
Jul  5 13:19:06.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:19:06.481: INFO: namespace secrets-1235 deletion completed in 6.11838167s

• [SLOW TEST:10.253 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
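The mappings variant above differs from the plain secret-volume tests only in remapping secret keys to chosen file paths via items. A minimal sketch with illustrative names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-mappings
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test-map   # illustrative; must exist
          items:
          - key: data-1                 # key inside the Secret
            path: new-path-data-1       # file appears under the mountPath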
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:19:06.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-2830
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jul  5 13:19:06.573: INFO: Found 0 stateful pods, waiting for 3
Jul  5 13:19:16.578: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  5 13:19:16.578: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  5 13:19:16.578: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul  5 13:19:26.578: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  5 13:19:26.579: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  5 13:19:26.579: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jul  5 13:19:26.608: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul  5 13:19:36.650: INFO: Updating stateful set ss2
Jul  5 13:19:36.658: INFO: Waiting for Pod statefulset-2830/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jul  5 13:19:46.810: INFO: Found 2 stateful pods, waiting for 3
Jul  5 13:19:56.827: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  5 13:19:56.827: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  5 13:19:56.827: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul  5 13:19:56.929: INFO: Updating stateful set ss2
Jul  5 13:19:56.954: INFO: Waiting for Pod statefulset-2830/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul  5 13:20:06.980: INFO: Updating stateful set ss2
Jul  5 13:20:07.013: INFO: Waiting for StatefulSet statefulset-2830/ss2 to complete update
Jul  5 13:20:07.013: INFO: Waiting for Pod statefulset-2830/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jul  5 13:20:17.019: INFO: Waiting for StatefulSet statefulset-2830/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jul  5 13:20:27.020: INFO: Deleting all statefulset in ns statefulset-2830
Jul  5 13:20:27.023: INFO: Scaling statefulset ss2 to 0
Jul  5 13:20:57.041: INFO: Waiting for statefulset status.replicas updated to 0
Jul  5 13:20:57.044: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:20:57.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2830" for this suite.
Jul  5 13:21:03.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:21:03.186: INFO: namespace statefulset-2830 deletion completed in 6.126044006s

• [SLOW TEST:116.705 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
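The canary and phased roll-out above are driven by the RollingUpdate partition: only pods with an ordinal greater than or equal to the partition move to the new revision, so lowering the partition step by step walks the update across the set. A sketch of the relevant StatefulSet shape, using the service and images named in the log:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: ss2
    spec:
      serviceName: test
      replicas: 3
      selector:
        matchLabels:
          app: ss2
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          partition: 2   # only ss2-2 updates; lower toward 0 for the full roll-out
      template:
        metadata:
          labels:
            app: ss2
        spec:
          containers:
          - name: nginx
            image: docker.io/library/nginx:1.15-alpine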
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:21:03.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul  5 13:21:03.308: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6898,SelfLink:/api/v1/namespaces/watch-6898/configmaps/e2e-watch-test-configmap-a,UID:a2336789-4288-43ad-88ee-04e7d583b030,ResourceVersion:233594,Generation:0,CreationTimestamp:2020-07-05 13:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  5 13:21:03.308: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6898,SelfLink:/api/v1/namespaces/watch-6898/configmaps/e2e-watch-test-configmap-a,UID:a2336789-4288-43ad-88ee-04e7d583b030,ResourceVersion:233594,Generation:0,CreationTimestamp:2020-07-05 13:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul  5 13:21:13.317: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6898,SelfLink:/api/v1/namespaces/watch-6898/configmaps/e2e-watch-test-configmap-a,UID:a2336789-4288-43ad-88ee-04e7d583b030,ResourceVersion:233614,Generation:0,CreationTimestamp:2020-07-05 13:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul  5 13:21:13.317: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6898,SelfLink:/api/v1/namespaces/watch-6898/configmaps/e2e-watch-test-configmap-a,UID:a2336789-4288-43ad-88ee-04e7d583b030,ResourceVersion:233614,Generation:0,CreationTimestamp:2020-07-05 13:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul  5 13:21:23.326: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6898,SelfLink:/api/v1/namespaces/watch-6898/configmaps/e2e-watch-test-configmap-a,UID:a2336789-4288-43ad-88ee-04e7d583b030,ResourceVersion:233634,Generation:0,CreationTimestamp:2020-07-05 13:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  5 13:21:23.326: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6898,SelfLink:/api/v1/namespaces/watch-6898/configmaps/e2e-watch-test-configmap-a,UID:a2336789-4288-43ad-88ee-04e7d583b030,ResourceVersion:233634,Generation:0,CreationTimestamp:2020-07-05 13:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul  5 13:21:33.333: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6898,SelfLink:/api/v1/namespaces/watch-6898/configmaps/e2e-watch-test-configmap-a,UID:a2336789-4288-43ad-88ee-04e7d583b030,ResourceVersion:233654,Generation:0,CreationTimestamp:2020-07-05 13:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  5 13:21:33.333: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6898,SelfLink:/api/v1/namespaces/watch-6898/configmaps/e2e-watch-test-configmap-a,UID:a2336789-4288-43ad-88ee-04e7d583b030,ResourceVersion:233654,Generation:0,CreationTimestamp:2020-07-05 13:21:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul  5 13:21:43.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6898,SelfLink:/api/v1/namespaces/watch-6898/configmaps/e2e-watch-test-configmap-b,UID:91cd952d-7d83-4543-b9a8-cf2230915307,ResourceVersion:233676,Generation:0,CreationTimestamp:2020-07-05 13:21:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  5 13:21:43.341: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6898,SelfLink:/api/v1/namespaces/watch-6898/configmaps/e2e-watch-test-configmap-b,UID:91cd952d-7d83-4543-b9a8-cf2230915307,ResourceVersion:233676,Generation:0,CreationTimestamp:2020-07-05 13:21:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul  5 13:21:53.348: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6898,SelfLink:/api/v1/namespaces/watch-6898/configmaps/e2e-watch-test-configmap-b,UID:91cd952d-7d83-4543-b9a8-cf2230915307,ResourceVersion:233697,Generation:0,CreationTimestamp:2020-07-05 13:21:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  5 13:21:53.348: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6898,SelfLink:/api/v1/namespaces/watch-6898/configmaps/e2e-watch-test-configmap-b,UID:91cd952d-7d83-4543-b9a8-cf2230915307,ResourceVersion:233697,Generation:0,CreationTimestamp:2020-07-05 13:21:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:22:03.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6898" for this suite.
Jul  5 13:22:09.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:22:09.447: INFO: namespace watch-6898 deletion completed in 6.09249533s

• [SLOW TEST:66.261 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
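The Watchers test above registers three watches (label A, label B, and A-or-B) and asserts each sees exactly the ADDED, MODIFIED, and DELETED notifications for the matching configmaps. A configmap shaped like the one in the log, with a hedged note on following it from the CLI:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: e2e-watch-test-configmap-a
      labels:
        watch-this-configmap: multiple-watchers-A
    # Changes to objects carrying this label can be followed with a
    # label-selected watch, e.g.:
    #   kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch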
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:22:09.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:22:13.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4554" for this suite.
Jul  5 13:22:59.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:22:59.726: INFO: namespace kubelet-test-4554 deletion completed in 46.13818288s

• [SLOW TEST:50.280 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
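The Kubelet test above launches a busybox container with a read-only root filesystem and verifies that writes to / fail. A minimal sketch (command is illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-readonly
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c", "echo test > /file; sleep 240"]
        securityContext:
          readOnlyRootFilesystem: true   # the write to /file is expected to fail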
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:22:59.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-7a90d8a3-a374-4f1e-8192-4548067db6e2
STEP: Creating a pod to test consume configMaps
Jul  5 13:22:59.815: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-536a115d-5bc1-4747-a0ea-dd71d228afb4" in namespace "projected-3585" to be "success or failure"
Jul  5 13:22:59.824: INFO: Pod "pod-projected-configmaps-536a115d-5bc1-4747-a0ea-dd71d228afb4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.299257ms
Jul  5 13:23:01.828: INFO: Pod "pod-projected-configmaps-536a115d-5bc1-4747-a0ea-dd71d228afb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013728477s
Jul  5 13:23:03.833: INFO: Pod "pod-projected-configmaps-536a115d-5bc1-4747-a0ea-dd71d228afb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01776889s
STEP: Saw pod success
Jul  5 13:23:03.833: INFO: Pod "pod-projected-configmaps-536a115d-5bc1-4747-a0ea-dd71d228afb4" satisfied condition "success or failure"
Jul  5 13:23:03.835: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-536a115d-5bc1-4747-a0ea-dd71d228afb4 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 13:23:03.889: INFO: Waiting for pod pod-projected-configmaps-536a115d-5bc1-4747-a0ea-dd71d228afb4 to disappear
Jul  5 13:23:03.918: INFO: Pod pod-projected-configmaps-536a115d-5bc1-4747-a0ea-dd71d228afb4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:23:03.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3585" for this suite.
Jul  5 13:23:09.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:23:10.011: INFO: namespace projected-3585 deletion completed in 6.089026553s

• [SLOW TEST:10.285 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
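The projected-volume test above consumes a configmap through a projected volume, which can merge several sources (configMaps, secrets, downward API) under one mount. A minimal single-source sketch with illustrative names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/projected/data-1"]
        volumeMounts:
        - name: projected-volume
          mountPath: /etc/projected
      volumes:
      - name: projected-volume
        projected:
          sources:
          - configMap:
              name: projected-configmap-test-volume   # illustrative; must exist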
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:23:10.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jul  5 13:23:10.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3295'
Jul  5 13:23:15.114: INFO: stderr: ""
Jul  5 13:23:15.114: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul  5 13:23:16.119: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:23:16.119: INFO: Found 0 / 1
Jul  5 13:23:17.139: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:23:17.140: INFO: Found 0 / 1
Jul  5 13:23:18.119: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:23:18.119: INFO: Found 1 / 1
Jul  5 13:23:18.119: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jul  5 13:23:18.123: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:23:18.123: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  5 13:23:18.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-njrxr --namespace=kubectl-3295 -p {"metadata":{"annotations":{"x":"y"}}}'
Jul  5 13:23:18.227: INFO: stderr: ""
Jul  5 13:23:18.227: INFO: stdout: "pod/redis-master-njrxr patched\n"
STEP: checking annotations
Jul  5 13:23:18.289: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:23:18.289: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:23:18.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3295" for this suite.
Jul  5 13:23:40.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:23:40.402: INFO: namespace kubectl-3295 deletion completed in 22.108321663s

• [SLOW TEST:30.390 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
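The kubectl patch test above pipes a ReplicationController manifest into kubectl create -f - (the manifest itself is not echoed to the log) and then applies the annotation patch shown verbatim in the log. A sketch of an RC matching the app=redis selector seen above (image is illustrative; the suite ships its own redis image):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: redis-master
    spec:
      replicas: 1
      selector:
        app: redis
      template:
        metadata:
          labels:
            app: redis
        spec:
          containers:
          - name: redis-master
            image: redis   # illustrative image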
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:23:40.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jul  5 13:23:40.487: INFO: Waiting up to 5m0s for pod "var-expansion-055cd578-b3ba-49ba-956f-122af0c6ed25" in namespace "var-expansion-968" to be "success or failure"
Jul  5 13:23:40.516: INFO: Pod "var-expansion-055cd578-b3ba-49ba-956f-122af0c6ed25": Phase="Pending", Reason="", readiness=false. Elapsed: 28.332952ms
Jul  5 13:23:42.520: INFO: Pod "var-expansion-055cd578-b3ba-49ba-956f-122af0c6ed25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032516794s
Jul  5 13:23:44.524: INFO: Pod "var-expansion-055cd578-b3ba-49ba-956f-122af0c6ed25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036742723s
STEP: Saw pod success
Jul  5 13:23:44.524: INFO: Pod "var-expansion-055cd578-b3ba-49ba-956f-122af0c6ed25" satisfied condition "success or failure"
Jul  5 13:23:44.528: INFO: Trying to get logs from node iruya-worker pod var-expansion-055cd578-b3ba-49ba-956f-122af0c6ed25 container dapi-container: 
STEP: delete the pod
Jul  5 13:23:44.551: INFO: Waiting for pod var-expansion-055cd578-b3ba-49ba-956f-122af0c6ed25 to disappear
Jul  5 13:23:44.555: INFO: Pod var-expansion-055cd578-b3ba-49ba-956f-122af0c6ed25 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:23:44.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-968" for this suite.
Jul  5 13:23:50.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:23:50.654: INFO: namespace var-expansion-968 deletion completed in 6.096111924s

• [SLOW TEST:10.252 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
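The mechanism this env-composition test exercises: an env var's Value may reference a previously defined var as $(NAME), and the kubelet expands the reference before the container starts. A minimal sketch of such a pod spec, with illustrative names and assuming k8s.io/api is on the module path:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "dapi-container",
			Image:   "busybox",
			Command: []string{"sh", "-c", "env"},
			Env: []corev1.EnvVar{
				{Name: "FIRST", Value: "foo"},
				// $(FIRST) is expanded by the kubelet to "foo" at container start.
				{Name: "COMPOSED", Value: "prefix-$(FIRST)-suffix"},
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```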
SSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:23:50.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jul  5 13:23:50.720: INFO: Waiting up to 5m0s for pod "var-expansion-78618518-622d-43cf-864d-0e0477f45d4f" in namespace "var-expansion-8581" to be "success or failure"
Jul  5 13:23:50.724: INFO: Pod "var-expansion-78618518-622d-43cf-864d-0e0477f45d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.779865ms
Jul  5 13:23:52.728: INFO: Pod "var-expansion-78618518-622d-43cf-864d-0e0477f45d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008308977s
Jul  5 13:23:54.732: INFO: Pod "var-expansion-78618518-622d-43cf-864d-0e0477f45d4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011635509s
STEP: Saw pod success
Jul  5 13:23:54.732: INFO: Pod "var-expansion-78618518-622d-43cf-864d-0e0477f45d4f" satisfied condition "success or failure"
Jul  5 13:23:54.735: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-78618518-622d-43cf-864d-0e0477f45d4f container dapi-container: 
STEP: delete the pod
Jul  5 13:23:54.771: INFO: Waiting for pod var-expansion-78618518-622d-43cf-864d-0e0477f45d4f to disappear
Jul  5 13:23:54.787: INFO: Pod var-expansion-78618518-622d-43cf-864d-0e0477f45d4f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:23:54.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8581" for this suite.
Jul  5 13:24:02.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:24:02.879: INFO: namespace var-expansion-8581 deletion completed in 8.087719075s

• [SLOW TEST:12.224 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
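Here the expansion target is the container's command rather than another env var: $(NAME) references inside Command are resolved against the container's environment before exec. A sketch under the same assumptions as above:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:  "dapi-container",
			Image: "busybox",
			Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
			// The kubelet rewrites $(MESSAGE) to "test-value" before exec.
			Command: []string{"/bin/echo", "$(MESSAGE)"},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```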
SSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:24:02.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jul  5 13:24:02.934: INFO: Waiting up to 5m0s for pod "downward-api-4971b2f1-3763-42f4-9231-b97b96537f6d" in namespace "downward-api-4742" to be "success or failure"
Jul  5 13:24:02.960: INFO: Pod "downward-api-4971b2f1-3763-42f4-9231-b97b96537f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 26.156521ms
Jul  5 13:24:04.963: INFO: Pod "downward-api-4971b2f1-3763-42f4-9231-b97b96537f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029778869s
Jul  5 13:24:07.135: INFO: Pod "downward-api-4971b2f1-3763-42f4-9231-b97b96537f6d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.201097515s
STEP: Saw pod success
Jul  5 13:24:07.135: INFO: Pod "downward-api-4971b2f1-3763-42f4-9231-b97b96537f6d" satisfied condition "success or failure"
Jul  5 13:24:07.209: INFO: Trying to get logs from node iruya-worker pod downward-api-4971b2f1-3763-42f4-9231-b97b96537f6d container dapi-container: 
STEP: delete the pod
Jul  5 13:24:07.267: INFO: Waiting for pod downward-api-4971b2f1-3763-42f4-9231-b97b96537f6d to disappear
Jul  5 13:24:07.309: INFO: Pod downward-api-4971b2f1-3763-42f4-9231-b97b96537f6d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:24:07.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4742" for this suite.
Jul  5 13:24:13.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:24:13.413: INFO: namespace downward-api-4742 deletion completed in 6.100411782s

• [SLOW TEST:10.534 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
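The downward API wiring this test checks exposes pod metadata and status fields as env vars via fieldRef. A minimal sketch (env var names illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// fieldEnv wires one downward-API field path into an env var.
func fieldEnv(name, path string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
		},
	}
}

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "dapi-container",
			Image:   "busybox",
			Command: []string{"sh", "-c", "env"},
			Env: []corev1.EnvVar{
				fieldEnv("POD_NAME", "metadata.name"),
				fieldEnv("POD_NAMESPACE", "metadata.namespace"),
				fieldEnv("POD_IP", "status.podIP"),
			},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```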
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:24:13.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jul  5 13:24:13.704: INFO: namespace kubectl-6996
Jul  5 13:24:13.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6996'
Jul  5 13:24:13.968: INFO: stderr: ""
Jul  5 13:24:13.968: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul  5 13:24:15.038: INFO: Selector matched 1 pod for map[app:redis]
Jul  5 13:24:15.038: INFO: Found 0 / 1
Jul  5 13:24:15.972: INFO: Selector matched 1 pod for map[app:redis]
Jul  5 13:24:15.972: INFO: Found 0 / 1
Jul  5 13:24:16.972: INFO: Selector matched 1 pod for map[app:redis]
Jul  5 13:24:16.972: INFO: Found 0 / 1
Jul  5 13:24:17.972: INFO: Selector matched 1 pod for map[app:redis]
Jul  5 13:24:17.972: INFO: Found 1 / 1
Jul  5 13:24:17.972: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jul  5 13:24:17.976: INFO: Selector matched 1 pod for map[app:redis]
Jul  5 13:24:17.976: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Jul  5 13:24:17.976: INFO: wait on redis-master startup in kubectl-6996 
Jul  5 13:24:17.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-sngtg redis-master --namespace=kubectl-6996'
Jul  5 13:24:18.095: INFO: stderr: ""
Jul  5 13:24:18.095: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Jul 13:24:16.891 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Jul 13:24:16.891 # Server started, Redis version 3.2.12\n1:M 05 Jul 13:24:16.891 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Jul 13:24:16.891 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jul  5 13:24:18.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6996'
Jul  5 13:24:18.243: INFO: stderr: ""
Jul  5 13:24:18.243: INFO: stdout: "service/rm2 exposed\n"
Jul  5 13:24:18.255: INFO: Service rm2 in namespace kubectl-6996 found.
STEP: exposing service
Jul  5 13:24:20.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6996'
Jul  5 13:24:20.398: INFO: stderr: ""
Jul  5 13:24:20.398: INFO: stdout: "service/rm3 exposed\n"
Jul  5 13:24:20.427: INFO: Service rm3 in namespace kubectl-6996 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:24:22.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6996" for this suite.
Jul  5 13:24:44.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:24:44.596: INFO: namespace kubectl-6996 deletion completed in 22.157094588s

• [SLOW TEST:31.182 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
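`kubectl expose` generates a Service whose selector is copied from the exposed object's pod labels. A sketch of roughly the object the first expose command above creates (selector assumed from the redis-master RC's labels):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2"},
		Spec: corev1.ServiceSpec{
			// Copied from the RC's pod template labels by `kubectl expose`.
			Selector: map[string]string{"app": "redis"},
			Ports: []corev1.ServicePort{{
				Port:       1234,                 // --port
				TargetPort: intstr.FromInt(6379), // --target-port
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```

The second command (`expose service rm2 --name=rm3`) works the same way, copying rm2's selector into a new Service on port 2345.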
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:24:44.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-dmt6
STEP: Creating a pod to test atomic-volume-subpath
Jul  5 13:24:44.667: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dmt6" in namespace "subpath-9363" to be "success or failure"
Jul  5 13:24:44.671: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.966341ms
Jul  5 13:24:46.675: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008416821s
Jul  5 13:24:48.679: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Running", Reason="", readiness=true. Elapsed: 4.012223402s
Jul  5 13:24:50.684: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Running", Reason="", readiness=true. Elapsed: 6.016678355s
Jul  5 13:24:52.688: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Running", Reason="", readiness=true. Elapsed: 8.021019061s
Jul  5 13:24:54.693: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Running", Reason="", readiness=true. Elapsed: 10.02552811s
Jul  5 13:24:56.697: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Running", Reason="", readiness=true. Elapsed: 12.029538825s
Jul  5 13:24:58.709: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Running", Reason="", readiness=true. Elapsed: 14.041543648s
Jul  5 13:25:00.713: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Running", Reason="", readiness=true. Elapsed: 16.046097512s
Jul  5 13:25:02.716: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Running", Reason="", readiness=true. Elapsed: 18.049395469s
Jul  5 13:25:04.719: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Running", Reason="", readiness=true. Elapsed: 20.05228753s
Jul  5 13:25:06.722: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Running", Reason="", readiness=true. Elapsed: 22.055438965s
Jul  5 13:25:08.727: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Running", Reason="", readiness=true. Elapsed: 24.059539383s
Jul  5 13:25:10.734: INFO: Pod "pod-subpath-test-projected-dmt6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.066514581s
STEP: Saw pod success
Jul  5 13:25:10.734: INFO: Pod "pod-subpath-test-projected-dmt6" satisfied condition "success or failure"
Jul  5 13:25:10.737: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-dmt6 container test-container-subpath-projected-dmt6: 
STEP: delete the pod
Jul  5 13:25:10.773: INFO: Waiting for pod pod-subpath-test-projected-dmt6 to disappear
Jul  5 13:25:10.783: INFO: Pod pod-subpath-test-projected-dmt6 no longer exists
STEP: Deleting pod pod-subpath-test-projected-dmt6
Jul  5 13:25:10.783: INFO: Deleting pod "pod-subpath-test-projected-dmt6" in namespace "subpath-9363"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:25:10.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9363" for this suite.
Jul  5 13:25:16.810: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:25:16.881: INFO: namespace subpath-9363 deletion completed in 6.09193747s

• [SLOW TEST:32.284 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
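The subPath mount this test exercises maps a single entry of a projected (atomically updated) volume into the container instead of the whole volume. A sketch with illustrative volume, ConfigMap, and key names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes: []corev1.Volume{{
			Name: "projected-vol",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "subpath-data"},
						},
					}},
				},
			},
		}},
		Containers: []corev1.Container{{
			Name:  "test-container-subpath",
			Image: "busybox",
			VolumeMounts: []corev1.VolumeMount{{
				Name:      "projected-vol",
				MountPath: "/test-volume",
				SubPath:   "key1", // mount only this entry, not the whole volume
			}},
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```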
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:25:16.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-a117ee3c-2aa9-41ac-b49e-a1832af99f27
STEP: Creating secret with name s-test-opt-upd-471a04f7-a554-448b-a9fb-847fe3da3e3c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a117ee3c-2aa9-41ac-b49e-a1832af99f27
STEP: Updating secret s-test-opt-upd-471a04f7-a554-448b-a9fb-847fe3da3e3c
STEP: Creating secret with name s-test-opt-create-28c6d832-f9d5-4a20-8990-513a5cd5506f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:26:39.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1536" for this suite.
Jul  5 13:27:01.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:27:02.015: INFO: namespace secrets-1536 deletion completed in 22.087221534s

• [SLOW TEST:105.133 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
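The "optional" semantics under test: a secret volume marked Optional lets the pod start even when the referenced secret is absent, and subsequent create/update/delete of the secrets is reflected into the mounted files, which is what the "waiting to observe update in volume" step checks. A sketch of one such volume (names illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "s-test-opt-del", // may be missing now or deleted later
				Optional:   &optional,        // pod still starts; files track the secret
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```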
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:27:02.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jul  5 13:27:02.104: INFO: PodSpec: initContainers in spec.initContainers
Jul  5 13:27:53.562: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0b36f192-003b-45f2-b71c-87cad440ed62", GenerateName:"", Namespace:"init-container-6895", SelfLink:"/api/v1/namespaces/init-container-6895/pods/pod-init-0b36f192-003b-45f2-b71c-87cad440ed62", UID:"65420948-0859-49a1-88b8-d099b6353397", ResourceVersion:"234698", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63729552422, loc:(*time.Location)(0x7eb18c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"104087644"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-5pz65", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00192a540), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5pz65", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5pz65", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-5pz65", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002cdb2f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0011c0f00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002cdb550)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002cdb570)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002cdb578), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002cdb57c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729552422, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, 
v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729552422, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729552422, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729552422, loc:(*time.Location)(0x7eb18c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.7", PodIP:"10.244.1.98", StartTime:(*v1.Time)(0xc000f34280), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc000f342c0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024642a0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://b992767c09a2c109d27b3bb9ad24113fe5f01d2a040b85c545f1d5dab6e7fb6b"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000f342e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000f342a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:27:53.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6895" for this suite.
Jul  5 13:28:15.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:28:15.769: INFO: namespace init-container-6895 deletion completed in 22.164535717s

• [SLOW TEST:73.754 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
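The pod shape is visible in the struct dump above: two init containers, the first of which always fails, ahead of a pause app container. Because init containers run serially and each must succeed before the next starts, init2 and run1 never run while init1 keeps crash-looping under RestartPolicy Always. A condensed sketch of that spec:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyAlways,
		InitContainers: []corev1.Container{
			// Always fails; under RestartPolicy Always it is restarted forever.
			{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
			// Blocked until init1 succeeds, so it never runs.
			{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
		},
		Containers: []corev1.Container{
			// Stays Waiting (PodInitializing) for the life of the test.
			{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
		},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```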
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:28:15.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  5 13:28:15.842: INFO: Waiting up to 5m0s for pod "pod-9063108a-c043-4680-a970-20d7e4cc2761" in namespace "emptydir-2813" to be "success or failure"
Jul  5 13:28:15.845: INFO: Pod "pod-9063108a-c043-4680-a970-20d7e4cc2761": Phase="Pending", Reason="", readiness=false. Elapsed: 3.349772ms
Jul  5 13:28:18.022: INFO: Pod "pod-9063108a-c043-4680-a970-20d7e4cc2761": Phase="Pending", Reason="", readiness=false. Elapsed: 2.180638794s
Jul  5 13:28:20.026: INFO: Pod "pod-9063108a-c043-4680-a970-20d7e4cc2761": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184309753s
Jul  5 13:28:22.031: INFO: Pod "pod-9063108a-c043-4680-a970-20d7e4cc2761": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.18900231s
STEP: Saw pod success
Jul  5 13:28:22.031: INFO: Pod "pod-9063108a-c043-4680-a970-20d7e4cc2761" satisfied condition "success or failure"
Jul  5 13:28:22.034: INFO: Trying to get logs from node iruya-worker2 pod pod-9063108a-c043-4680-a970-20d7e4cc2761 container test-container: 
STEP: delete the pod
Jul  5 13:28:22.080: INFO: Waiting for pod pod-9063108a-c043-4680-a970-20d7e4cc2761 to disappear
Jul  5 13:28:22.084: INFO: Pod pod-9063108a-c043-4680-a970-20d7e4cc2761 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:28:22.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2813" for this suite.
Jul  5 13:28:28.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:28:28.227: INFO: namespace emptydir-2813 deletion completed in 6.139771425s

• [SLOW TEST:12.457 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
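The "(root,0644,tmpfs)" case mounts a memory-backed emptyDir and has the test container create a file as root with mode 0644, then verify the mode it observes. The volume half of that, sketched:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			// Medium "Memory" backs the emptyDir with tmpfs instead of node disk.
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```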
S
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:28:28.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-5177/secret-test-69ea05e4-8733-4434-8e02-6eb57a65d023
STEP: Creating a pod to test consume secrets
Jul  5 13:28:28.306: INFO: Waiting up to 5m0s for pod "pod-configmaps-588e9d5a-477a-4e64-9227-0fdec7a1ce2d" in namespace "secrets-5177" to be "success or failure"
Jul  5 13:28:28.405: INFO: Pod "pod-configmaps-588e9d5a-477a-4e64-9227-0fdec7a1ce2d": Phase="Pending", Reason="", readiness=false. Elapsed: 99.484642ms
Jul  5 13:28:30.435: INFO: Pod "pod-configmaps-588e9d5a-477a-4e64-9227-0fdec7a1ce2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.128993608s
Jul  5 13:28:32.447: INFO: Pod "pod-configmaps-588e9d5a-477a-4e64-9227-0fdec7a1ce2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141231414s
STEP: Saw pod success
Jul  5 13:28:32.447: INFO: Pod "pod-configmaps-588e9d5a-477a-4e64-9227-0fdec7a1ce2d" satisfied condition "success or failure"
Jul  5 13:28:32.450: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-588e9d5a-477a-4e64-9227-0fdec7a1ce2d container env-test: 
STEP: delete the pod
Jul  5 13:28:32.498: INFO: Waiting for pod pod-configmaps-588e9d5a-477a-4e64-9227-0fdec7a1ce2d to disappear
Jul  5 13:28:32.532: INFO: Pod pod-configmaps-588e9d5a-477a-4e64-9227-0fdec7a1ce2d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:28:32.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5177" for this suite.
Jul  5 13:28:38.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:28:38.622: INFO: namespace secrets-5177 deletion completed in 6.086385981s

• [SLOW TEST:10.394 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
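Consuming a secret "via the environment" means wiring a single secret key into an env var with secretKeyRef; the env-test container then just prints its environment. A sketch (secret name and key illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := corev1.EnvVar{
		Name: "SECRET_DATA",
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
				Key:                  "data-1",
			},
		},
	}
	out, _ := json.MarshalIndent(env, "", "  ")
	fmt.Println(string(out))
}
```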
SSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:28:38.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jul  5 13:28:38.708: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4276" to be "success or failure"
Jul  5 13:28:38.759: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 51.002963ms
Jul  5 13:28:40.762: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054679445s
Jul  5 13:28:42.766: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058834969s
STEP: Saw pod success
Jul  5 13:28:42.766: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jul  5 13:28:42.770: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jul  5 13:28:42.935: INFO: Waiting for pod pod-host-path-test to disappear
Jul  5 13:28:42.972: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:28:42.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-4276" for this suite.
Jul  5 13:28:49.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:28:49.203: INFO: namespace hostpath-4276 deletion completed in 6.227363327s

• [SLOW TEST:10.581 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
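The hostPath case mounts a directory from the node's filesystem and checks the mode the container observes on the mount point. A sketch of the volume (path and type are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/tmp/test-hostpath", // directory on the node, created if absent
				Type: &hostPathType,
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
```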
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:28:49.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  5 13:28:49.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-6108'
Jul  5 13:28:49.380: INFO: stderr: ""
Jul  5 13:28:49.380: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jul  5 13:28:54.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-6108 -o json'
Jul  5 13:28:54.544: INFO: stderr: ""
Jul  5 13:28:54.544: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-07-05T13:28:49Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-6108\",\n        \"resourceVersion\": \"234917\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-6108/pods/e2e-test-nginx-pod\",\n        \"uid\": \"93071df8-6f97-4c4d-a46e-7218f2a483c3\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-zvwb5\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-zvwb5\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-zvwb5\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-05T13:28:49Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-05T13:28:52Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-05T13:28:52Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-05T13:28:49Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://3011b932d4f580407b2a360931b2978457458c9bab0a4b032f9a73abf20b77ad\",\n                
\"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-07-05T13:28:51Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.17.0.6\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.140\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-07-05T13:28:49Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jul  5 13:28:54.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-6108'
Jul  5 13:28:54.823: INFO: stderr: ""
Jul  5 13:28:54.823: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jul  5 13:28:54.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6108'
Jul  5 13:28:58.305: INFO: stderr: ""
Jul  5 13:28:58.305: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:28:58.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6108" for this suite.
Jul  5 13:29:04.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:29:04.403: INFO: namespace kubectl-6108 deletion completed in 6.09531606s

• [SLOW TEST:15.200 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
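`kubectl replace -f -` needs a complete object, so the test fetches the pod as JSON (shown above), swaps the container image, and pipes the result back. The essential mutation, sketched (a stand-in object; a real replace would reuse the full fetched manifest):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Stand-in for the pod fetched with `kubectl get pod e2e-test-nginx-pod -o json`.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-pod", Namespace: "kubectl-6108"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "e2e-test-nginx-pod",
			Image: "docker.io/library/nginx:1.14-alpine",
		}}},
	}
	// The only field the test changes before piping to `kubectl replace -f -`.
	pod.Spec.Containers[0].Image = "docker.io/library/busybox:1.29"
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```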
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:29:04.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jul  5 13:29:04.454: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  5 13:29:04.471: INFO: Waiting for terminating namespaces to be deleted...
Jul  5 13:29:04.473: INFO: Logging pods the kubelet thinks are on node iruya-worker before test
Jul  5 13:29:04.479: INFO: kube-proxy-nxrg9 from kube-system started at 2020-07-04 09:21:50 +0000 UTC (1 container status recorded)
Jul  5 13:29:04.479: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  5 13:29:04.480: INFO: kindnet-469kb from kube-system started at 2020-07-04 09:21:50 +0000 UTC (1 container status recorded)
Jul  5 13:29:04.480: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  5 13:29:04.480: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test
Jul  5 13:29:04.484: INFO: kube-proxy-wvch7 from kube-system started at 2020-07-04 09:21:50 +0000 UTC (1 container status recorded)
Jul  5 13:29:04.484: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  5 13:29:04.484: INFO: kindnet-gj45r from kube-system started at 2020-07-04 09:21:50 +0000 UTC (1 container status recorded)
Jul  5 13:29:04.484: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.161ede4df305eac2], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:29:05.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4806" for this suite.
Jul  5 13:29:11.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:29:11.614: INFO: namespace sched-pred-4806 deletion completed in 6.097592733s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.210 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
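The predicate is driven by a pod whose nodeSelector matches no node's labels, so scheduling fails with exactly the event quoted above. A sketch of such a spec (label key and value illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	spec := corev1.PodSpec{
		// No node carries this label, so the scheduler reports
		// "0/3 nodes are available: 3 node(s) didn't match node selector."
		NodeSelector: map[string]string{"e2e.example/nonexistent": "true"},
		Containers: []corev1.Container{{
			Name:  "restricted-pod",
			Image: "k8s.gcr.io/pause:3.1",
		}},
	}
	out, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(out))
}
```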
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:29:11.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jul  5 13:29:15.705: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-66091ea4-061b-40b5-9b1e-c1c6ed6f1686,GenerateName:,Namespace:events-9179,SelfLink:/api/v1/namespaces/events-9179/pods/send-events-66091ea4-061b-40b5-9b1e-c1c6ed6f1686,UID:63f38928-6bf9-46a5-86dc-de115631462c,ResourceVersion:235007,Generation:0,CreationTimestamp:2020-07-05 13:29:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 677807398,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dhkl5 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dhkl5,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-dhkl5 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003117ab0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003117ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 13:29:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 13:29:14 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 13:29:14 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 13:29:11 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.1.101,StartTime:2020-07-05 13:29:11 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-07-05 13:29:14 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://faacbe9ec981368beb90480805c309eba982d0341596ed9d3c4d092d1810bbcc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jul  5 13:29:17.710: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jul  5 13:29:19.714: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:29:19.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9179" for this suite.
Jul  5 13:29:57.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:29:57.886: INFO: namespace events-9179 deletion completed in 38.083677597s

• [SLOW TEST:46.272 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
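Distinguishing the "scheduler event" from the "kubelet event" for a pod comes down to listing Events with a field selector on the involved object and on the reporting source. A sketch of building those selectors (pod name and namespace taken from the log above; the source values are assumptions, not verified against the suite's code):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	base := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      "send-events-66091ea4-061b-40b5-9b1e-c1c6ed6f1686",
		"involvedObject.namespace": "events-9179",
	}
	schedulerSel := fields.Set{"source": "default-scheduler"} // assumed source name
	kubeletSel := fields.Set{"source": "kubelet"}             // assumed source name
	for k, v := range base {
		schedulerSel[k] = v
		kubeletSel[k] = v
	}
	// Pass each as the field selector of an Event list call.
	fmt.Println(schedulerSel.AsSelector().String())
	fmt.Println(kubeletSel.AsSelector().String())
}
```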
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:29:57.886: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-8845a2b2-d1c8-4e53-8229-a866f758eb0d
STEP: Creating configMap with name cm-test-opt-upd-dbcb7a94-cf31-4779-8e88-bad074152b2d
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8845a2b2-d1c8-4e53-8229-a866f758eb0d
STEP: Updating configmap cm-test-opt-upd-dbcb7a94-cf31-4779-8e88-bad074152b2d
STEP: Creating configMap with name cm-test-opt-create-68b1f6e5-d7f6-4589-8f4c-16dbb8a3057a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:30:06.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7777" for this suite.
Jul  5 13:30:28.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:30:28.213: INFO: namespace configmap-7777 deletion completed in 22.093248354s

• [SLOW TEST:30.327 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
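The test above deletes one optional configMap, updates a second and creates a third while the pod is running, then waits for the volume contents to converge. A minimal sketch of an optional configMap volume, with all names illustrative rather than taken from this run:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-opt-demo                # illustrative name
  spec:
    containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
    volumes:
    - name: cfg
      configMap:
        name: cm-maybe-missing       # may be absent or deleted later
        optional: true               # the pod still starts; the dir is just empty
  EOF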
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:30:28.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jul  5 13:30:28.299: INFO: Waiting up to 5m0s for pod "var-expansion-1f92466b-8da4-41b3-8337-52d087994b6a" in namespace "var-expansion-6817" to be "success or failure"
Jul  5 13:30:28.302: INFO: Pod "var-expansion-1f92466b-8da4-41b3-8337-52d087994b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.977759ms
Jul  5 13:30:30.307: INFO: Pod "var-expansion-1f92466b-8da4-41b3-8337-52d087994b6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007351852s
Jul  5 13:30:32.312: INFO: Pod "var-expansion-1f92466b-8da4-41b3-8337-52d087994b6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012257729s
STEP: Saw pod success
Jul  5 13:30:32.312: INFO: Pod "var-expansion-1f92466b-8da4-41b3-8337-52d087994b6a" satisfied condition "success or failure"
Jul  5 13:30:32.315: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-1f92466b-8da4-41b3-8337-52d087994b6a container dapi-container: 
STEP: delete the pod
Jul  5 13:30:32.334: INFO: Waiting for pod var-expansion-1f92466b-8da4-41b3-8337-52d087994b6a to disappear
Jul  5 13:30:32.338: INFO: Pod var-expansion-1f92466b-8da4-41b3-8337-52d087994b6a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:30:32.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6817" for this suite.
Jul  5 13:30:38.356: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:30:38.430: INFO: namespace var-expansion-6817 deletion completed in 6.088831168s

• [SLOW TEST:10.216 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
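Substitution in a container's args relies on the kubelet expanding $(VAR) references against the container's environment before the process starts. A minimal sketch, with hypothetical names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo         # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      env:
      - name: TEST_VAR
        value: test-value
      command: ["sh", "-c"]
      args: ["echo $(TEST_VAR)"]     # expanded by the kubelet, not by the shell
  EOF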
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:30:38.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 13:30:38.485: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:30:42.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8018" for this suite.
Jul  5 13:31:20.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:31:20.750: INFO: namespace pods-8018 deletion completed in 38.09262714s

• [SLOW TEST:42.320 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
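The test drives the pod's exec subresource over a raw websocket; kubectl exec hits the same remote-command endpoint (kubectl itself negotiates a SPDY stream rather than a websocket), so a rough CLI equivalent, with a placeholder pod name, is:

  kubectl exec my-pod -- /bin/sh -c 'echo remote execution works'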
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:31:20.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4481
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul  5 13:31:20.809: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jul  5 13:31:48.933: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.104:8080/dial?request=hostName&protocol=http&host=10.244.1.103&port=8080&tries=1'] Namespace:pod-network-test-4481 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:31:48.933: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:31:48.969518       6 log.go:172] (0xc00118e420) (0xc0015506e0) Create stream
I0705 13:31:48.969546       6 log.go:172] (0xc00118e420) (0xc0015506e0) Stream added, broadcasting: 1
I0705 13:31:48.971623       6 log.go:172] (0xc00118e420) Reply frame received for 1
I0705 13:31:48.971663       6 log.go:172] (0xc00118e420) (0xc0027846e0) Create stream
I0705 13:31:48.971678       6 log.go:172] (0xc00118e420) (0xc0027846e0) Stream added, broadcasting: 3
I0705 13:31:48.972715       6 log.go:172] (0xc00118e420) Reply frame received for 3
I0705 13:31:48.972758       6 log.go:172] (0xc00118e420) (0xc001550780) Create stream
I0705 13:31:48.972779       6 log.go:172] (0xc00118e420) (0xc001550780) Stream added, broadcasting: 5
I0705 13:31:48.974359       6 log.go:172] (0xc00118e420) Reply frame received for 5
I0705 13:31:49.069639       6 log.go:172] (0xc00118e420) Data frame received for 3
I0705 13:31:49.069683       6 log.go:172] (0xc0027846e0) (3) Data frame handling
I0705 13:31:49.069713       6 log.go:172] (0xc0027846e0) (3) Data frame sent
I0705 13:31:49.070211       6 log.go:172] (0xc00118e420) Data frame received for 3
I0705 13:31:49.070224       6 log.go:172] (0xc0027846e0) (3) Data frame handling
I0705 13:31:49.070259       6 log.go:172] (0xc00118e420) Data frame received for 5
I0705 13:31:49.070283       6 log.go:172] (0xc001550780) (5) Data frame handling
I0705 13:31:49.071779       6 log.go:172] (0xc00118e420) Data frame received for 1
I0705 13:31:49.071798       6 log.go:172] (0xc0015506e0) (1) Data frame handling
I0705 13:31:49.071815       6 log.go:172] (0xc0015506e0) (1) Data frame sent
I0705 13:31:49.071828       6 log.go:172] (0xc00118e420) (0xc0015506e0) Stream removed, broadcasting: 1
I0705 13:31:49.071894       6 log.go:172] (0xc00118e420) (0xc0015506e0) Stream removed, broadcasting: 1
I0705 13:31:49.071916       6 log.go:172] (0xc00118e420) (0xc0027846e0) Stream removed, broadcasting: 3
I0705 13:31:49.071923       6 log.go:172] (0xc00118e420) (0xc001550780) Stream removed, broadcasting: 5
Jul  5 13:31:49.071: INFO: Waiting for endpoints: map[]
I0705 13:31:49.071981       6 log.go:172] (0xc00118e420) Go away received
Jul  5 13:31:49.075: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.104:8080/dial?request=hostName&protocol=http&host=10.244.2.143&port=8080&tries=1'] Namespace:pod-network-test-4481 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:31:49.075: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:31:49.108457       6 log.go:172] (0xc0009e8dc0) (0xc002784b40) Create stream
I0705 13:31:49.108495       6 log.go:172] (0xc0009e8dc0) (0xc002784b40) Stream added, broadcasting: 1
I0705 13:31:49.110809       6 log.go:172] (0xc0009e8dc0) Reply frame received for 1
I0705 13:31:49.110852       6 log.go:172] (0xc0009e8dc0) (0xc0005b5900) Create stream
I0705 13:31:49.110867       6 log.go:172] (0xc0009e8dc0) (0xc0005b5900) Stream added, broadcasting: 3
I0705 13:31:49.111747       6 log.go:172] (0xc0009e8dc0) Reply frame received for 3
I0705 13:31:49.111781       6 log.go:172] (0xc0009e8dc0) (0xc002784be0) Create stream
I0705 13:31:49.111792       6 log.go:172] (0xc0009e8dc0) (0xc002784be0) Stream added, broadcasting: 5
I0705 13:31:49.112825       6 log.go:172] (0xc0009e8dc0) Reply frame received for 5
I0705 13:31:49.169901       6 log.go:172] (0xc0009e8dc0) Data frame received for 3
I0705 13:31:49.169941       6 log.go:172] (0xc0005b5900) (3) Data frame handling
I0705 13:31:49.169982       6 log.go:172] (0xc0005b5900) (3) Data frame sent
I0705 13:31:49.170893       6 log.go:172] (0xc0009e8dc0) Data frame received for 5
I0705 13:31:49.170925       6 log.go:172] (0xc002784be0) (5) Data frame handling
I0705 13:31:49.170954       6 log.go:172] (0xc0009e8dc0) Data frame received for 3
I0705 13:31:49.170969       6 log.go:172] (0xc0005b5900) (3) Data frame handling
I0705 13:31:49.172338       6 log.go:172] (0xc0009e8dc0) Data frame received for 1
I0705 13:31:49.172380       6 log.go:172] (0xc002784b40) (1) Data frame handling
I0705 13:31:49.172433       6 log.go:172] (0xc002784b40) (1) Data frame sent
I0705 13:31:49.172458       6 log.go:172] (0xc0009e8dc0) (0xc002784b40) Stream removed, broadcasting: 1
I0705 13:31:49.172483       6 log.go:172] (0xc0009e8dc0) Go away received
I0705 13:31:49.172593       6 log.go:172] (0xc0009e8dc0) (0xc002784b40) Stream removed, broadcasting: 1
I0705 13:31:49.172616       6 log.go:172] (0xc0009e8dc0) (0xc0005b5900) Stream removed, broadcasting: 3
I0705 13:31:49.172626       6 log.go:172] (0xc0009e8dc0) (0xc002784be0) Stream removed, broadcasting: 5
Jul  5 13:31:49.172: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:31:49.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4481" for this suite.
Jul  5 13:32:11.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:32:11.278: INFO: namespace pod-network-test-4481 deletion completed in 22.101348421s

• [SLOW TEST:50.527 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
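Each endpoint check above runs curl inside the host test pod against the netexec /dial endpoint, which in turn dials the target pod and reports what it saw. The same probe can be replayed by hand; the namespace and pod IPs below are from this run and would differ on a live cluster:

  kubectl exec --namespace pod-network-test-4481 host-test-container-pod -c hostexec -- \
    /bin/sh -c "curl -g -q -s 'http://10.244.1.104:8080/dial?request=hostName&protocol=http&host=10.244.1.103&port=8080&tries=1'"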
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:32:11.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-2308e9e2-a371-429b-aec1-28f608d43125
STEP: Creating a pod to test consume configMaps
Jul  5 13:32:11.410: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-10eb834a-168a-4dbf-998c-b9b7ad141430" in namespace "projected-6151" to be "success or failure"
Jul  5 13:32:11.485: INFO: Pod "pod-projected-configmaps-10eb834a-168a-4dbf-998c-b9b7ad141430": Phase="Pending", Reason="", readiness=false. Elapsed: 75.882036ms
Jul  5 13:32:13.489: INFO: Pod "pod-projected-configmaps-10eb834a-168a-4dbf-998c-b9b7ad141430": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079959181s
Jul  5 13:32:15.493: INFO: Pod "pod-projected-configmaps-10eb834a-168a-4dbf-998c-b9b7ad141430": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083478644s
STEP: Saw pod success
Jul  5 13:32:15.493: INFO: Pod "pod-projected-configmaps-10eb834a-168a-4dbf-998c-b9b7ad141430" satisfied condition "success or failure"
Jul  5 13:32:15.496: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-10eb834a-168a-4dbf-998c-b9b7ad141430 container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 13:32:15.553: INFO: Waiting for pod pod-projected-configmaps-10eb834a-168a-4dbf-998c-b9b7ad141430 to disappear
Jul  5 13:32:15.579: INFO: Pod pod-projected-configmaps-10eb834a-168a-4dbf-998c-b9b7ad141430 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:32:15.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6151" for this suite.
Jul  5 13:32:21.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:32:21.707: INFO: namespace projected-6151 deletion completed in 6.101489992s

• [SLOW TEST:10.429 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
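Consuming one configMap through multiple volumes in the same pod just means declaring two volume entries that reference the same source. A sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-demo          # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/cfg-a/* /etc/cfg-b/*"]
      volumeMounts:
      - {name: cfg-a, mountPath: /etc/cfg-a}
      - {name: cfg-b, mountPath: /etc/cfg-b}
    volumes:
    - name: cfg-a
      projected:
        sources:
        - configMap: {name: my-cm}   # same configMap projected twice
    - name: cfg-b
      projected:
        sources:
        - configMap: {name: my-cm}
  EOF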
SS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:32:21.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:32:25.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5651" for this suite.
Jul  5 13:32:32.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:32:32.247: INFO: namespace kubelet-test-5651 deletion completed in 6.395116026s

• [SLOW TEST:10.540 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
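The terminated reason the test asserts on lives under the container's last state in pod status; with a placeholder pod name it can be read with jsonpath:

  kubectl get pod always-fails-pod \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'
  # a non-zero exit typically yields "Error"; an OOM kill yields "OOMKilled"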
SSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:32:32.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:32:36.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-598" for this suite.
Jul  5 13:33:14.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:33:14.475: INFO: namespace kubelet-test-598 deletion completed in 38.115935324s

• [SLOW TEST:42.228 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:33:14.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0705 13:33:24.567536       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  5 13:33:24.567: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:33:24.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7338" for this suite.
Jul  5 13:33:30.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:33:30.681: INFO: namespace gc-7338 deletion completed in 6.110984662s

• [SLOW TEST:16.205 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
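The garbage collector deletes the rc's pods because they carry an ownerReference to it and the delete was not an orphaning delete. From the CLI the same distinction is the cascade flag; the name below is a placeholder, and --cascade was still a boolean in kubectl of this vintage:

  kubectl delete rc my-rc --cascade=true    # GC deletes the dependent pods
  kubectl delete rc my-rc --cascade=false   # pods are orphaned instead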
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:33:30.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 13:33:30.747: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b8de8e29-5663-4e15-b4a1-236c9609cdae" in namespace "projected-6209" to be "success or failure"
Jul  5 13:33:30.767: INFO: Pod "downwardapi-volume-b8de8e29-5663-4e15-b4a1-236c9609cdae": Phase="Pending", Reason="", readiness=false. Elapsed: 19.383258ms
Jul  5 13:33:32.770: INFO: Pod "downwardapi-volume-b8de8e29-5663-4e15-b4a1-236c9609cdae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022751463s
Jul  5 13:33:34.775: INFO: Pod "downwardapi-volume-b8de8e29-5663-4e15-b4a1-236c9609cdae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027607423s
STEP: Saw pod success
Jul  5 13:33:34.775: INFO: Pod "downwardapi-volume-b8de8e29-5663-4e15-b4a1-236c9609cdae" satisfied condition "success or failure"
Jul  5 13:33:34.779: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b8de8e29-5663-4e15-b4a1-236c9609cdae container client-container: 
STEP: delete the pod
Jul  5 13:33:34.811: INFO: Waiting for pod downwardapi-volume-b8de8e29-5663-4e15-b4a1-236c9609cdae to disappear
Jul  5 13:33:34.826: INFO: Pod downwardapi-volume-b8de8e29-5663-4e15-b4a1-236c9609cdae no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:33:34.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6209" for this suite.
Jul  5 13:33:40.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:33:40.917: INFO: namespace projected-6209 deletion completed in 6.087203435s

• [SLOW TEST:10.236 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
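The per-item mode setting applies a specific file mode to a single projected downward API file, which is what this test verifies. A minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mode-demo         # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef: {fieldPath: metadata.name}
              mode: 0400             # the per-item mode under test
  EOF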
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:33:40.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  5 13:33:45.027: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:33:45.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-288" for this suite.
Jul  5 13:33:51.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:33:51.185: INFO: namespace container-runtime-288 deletion completed in 6.110637387s

• [SLOW TEST:10.268 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
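With TerminationMessagePolicy set to FallbackToLogsOnError, a failed container whose termination-message file is empty gets the tail of its log as the message, which is why the test expects DONE above. A sketch with an illustrative name:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: termination-demo           # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "echo DONE; exit 1"]
      terminationMessagePolicy: FallbackToLogsOnError
  EOF
  # afterwards, the log tail shows up as the termination message:
  kubectl get pod termination-demo \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'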
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:33:51.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul  5 13:33:51.291: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4383,SelfLink:/api/v1/namespaces/watch-4383/configmaps/e2e-watch-test-label-changed,UID:3f3a1e97-515c-449c-9fa2-4ce06b0bbf3f,ResourceVersion:235889,Generation:0,CreationTimestamp:2020-07-05 13:33:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jul  5 13:33:51.291: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4383,SelfLink:/api/v1/namespaces/watch-4383/configmaps/e2e-watch-test-label-changed,UID:3f3a1e97-515c-449c-9fa2-4ce06b0bbf3f,ResourceVersion:235890,Generation:0,CreationTimestamp:2020-07-05 13:33:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jul  5 13:33:51.291: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4383,SelfLink:/api/v1/namespaces/watch-4383/configmaps/e2e-watch-test-label-changed,UID:3f3a1e97-515c-449c-9fa2-4ce06b0bbf3f,ResourceVersion:235891,Generation:0,CreationTimestamp:2020-07-05 13:33:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul  5 13:34:01.327: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4383,SelfLink:/api/v1/namespaces/watch-4383/configmaps/e2e-watch-test-label-changed,UID:3f3a1e97-515c-449c-9fa2-4ce06b0bbf3f,ResourceVersion:235913,Generation:0,CreationTimestamp:2020-07-05 13:33:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jul  5 13:34:01.327: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4383,SelfLink:/api/v1/namespaces/watch-4383/configmaps/e2e-watch-test-label-changed,UID:3f3a1e97-515c-449c-9fa2-4ce06b0bbf3f,ResourceVersion:235914,Generation:0,CreationTimestamp:2020-07-05 13:33:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jul  5 13:34:01.327: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-4383,SelfLink:/api/v1/namespaces/watch-4383/configmaps/e2e-watch-test-label-changed,UID:3f3a1e97-515c-449c-9fa2-4ce06b0bbf3f,ResourceVersion:235915,Generation:0,CreationTimestamp:2020-07-05 13:33:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:34:01.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4383" for this suite.
Jul  5 13:34:07.363: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:34:07.446: INFO: namespace watch-4383 deletion completed in 6.115392787s

• [SLOW TEST:16.261 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
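The ADDED/MODIFIED/DELETED notifications above come from a label-selected watch, so moving the label out of and back into the selector looks like a delete followed by an add. The same stream can be observed from the CLI (label value copied from this run):

  kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch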
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:34:07.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:35:07.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-97" for this suite.
Jul  5 13:35:29.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:35:29.656: INFO: namespace container-probe-97 deletion completed in 22.101599495s

• [SLOW TEST:82.210 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
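A readiness probe that always fails leaves the pod Running but never Ready, and, unlike a liveness probe, never triggers a restart. A minimal sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: readiness-demo             # illustrative name
  spec:
    containers:
    - name: demo
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      readinessProbe:
        exec:
          command: ["/bin/false"]    # always fails: READY stays 0/1, RESTARTS stays 0
  EOF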
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:35:29.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-35602f1b-1176-4c7f-867a-f4249147687a
STEP: Creating a pod to test consume configMaps
Jul  5 13:35:29.765: INFO: Waiting up to 5m0s for pod "pod-configmaps-fad5f974-a627-4b98-a49d-faf3701625c6" in namespace "configmap-7381" to be "success or failure"
Jul  5 13:35:29.786: INFO: Pod "pod-configmaps-fad5f974-a627-4b98-a49d-faf3701625c6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.442937ms
Jul  5 13:35:31.791: INFO: Pod "pod-configmaps-fad5f974-a627-4b98-a49d-faf3701625c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026013183s
Jul  5 13:35:33.795: INFO: Pod "pod-configmaps-fad5f974-a627-4b98-a49d-faf3701625c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030286278s
STEP: Saw pod success
Jul  5 13:35:33.795: INFO: Pod "pod-configmaps-fad5f974-a627-4b98-a49d-faf3701625c6" satisfied condition "success or failure"
Jul  5 13:35:33.798: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-fad5f974-a627-4b98-a49d-faf3701625c6 container configmap-volume-test: 
STEP: delete the pod
Jul  5 13:35:33.829: INFO: Waiting for pod pod-configmaps-fad5f974-a627-4b98-a49d-faf3701625c6 to disappear
Jul  5 13:35:33.834: INFO: Pod pod-configmaps-fad5f974-a627-4b98-a49d-faf3701625c6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:35:33.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7381" for this suite.
Jul  5 13:35:39.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:35:39.927: INFO: namespace configmap-7381 deletion completed in 6.08801105s

• [SLOW TEST:10.270 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
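"With mappings" means the configMap keys are remapped to explicit file paths via items, rather than each key becoming a file named after itself. A sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-mapping-demo            # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/cfg/path/to/data"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
    volumes:
    - name: cfg
      configMap:
        name: my-cm
        items:
        - key: data-1                # key inside the configMap
          path: path/to/data         # file it appears as in the volume
  EOF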
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:35:39.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 13:35:39.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:35:44.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8397" for this suite.
Jul  5 13:36:26.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:36:26.290: INFO: namespace pods-8397 deletion completed in 42.187975281s

• [SLOW TEST:46.363 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:36:26.291: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-1261ee65-0afc-48a7-bd54-a7936945d9fa
STEP: Creating a pod to test consume configMaps
Jul  5 13:36:26.375: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-400b47c4-2a1a-4349-874c-a7dfabee039f" in namespace "projected-6705" to be "success or failure"
Jul  5 13:36:26.387: INFO: Pod "pod-projected-configmaps-400b47c4-2a1a-4349-874c-a7dfabee039f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.217751ms
Jul  5 13:36:28.399: INFO: Pod "pod-projected-configmaps-400b47c4-2a1a-4349-874c-a7dfabee039f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023231836s
Jul  5 13:36:30.403: INFO: Pod "pod-projected-configmaps-400b47c4-2a1a-4349-874c-a7dfabee039f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027622577s
Jul  5 13:36:32.407: INFO: Pod "pod-projected-configmaps-400b47c4-2a1a-4349-874c-a7dfabee039f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031973243s
STEP: Saw pod success
Jul  5 13:36:32.407: INFO: Pod "pod-projected-configmaps-400b47c4-2a1a-4349-874c-a7dfabee039f" satisfied condition "success or failure"
Jul  5 13:36:32.411: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-400b47c4-2a1a-4349-874c-a7dfabee039f container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 13:36:32.430: INFO: Waiting for pod pod-projected-configmaps-400b47c4-2a1a-4349-874c-a7dfabee039f to disappear
Jul  5 13:36:32.446: INFO: Pod pod-projected-configmaps-400b47c4-2a1a-4349-874c-a7dfabee039f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:36:32.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6705" for this suite.
Jul  5 13:36:38.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:36:38.555: INFO: namespace projected-6705 deletion completed in 6.106330187s

• [SLOW TEST:12.264 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
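Consuming the volume as non-root just means the pod-level security context sets a uid, and the projected files must still be readable at that uid. A sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: cm-nonroot-demo            # illustrative name
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1000                # read the volume as a non-root uid
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["sh", "-c", "id && cat /etc/cfg/*"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap: {name: my-cm}
  EOF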
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:36:38.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7880.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7880.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  5 13:36:44.691: INFO: DNS probes using dns-7880/dns-test-d8750419-47d7-48c4-abab-1bd058ad38ea succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:36:44.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7880" for this suite.
Jul  5 13:36:50.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:36:50.838: INFO: namespace dns-7880 deletion completed in 6.104656738s

• [SLOW TEST:12.283 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:36:50.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 13:36:50.909: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c00a7702-25f2-4fe2-8228-95eeb2ad427d" in namespace "downward-api-6464" to be "success or failure"
Jul  5 13:36:50.924: INFO: Pod "downwardapi-volume-c00a7702-25f2-4fe2-8228-95eeb2ad427d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.54738ms
Jul  5 13:36:52.928: INFO: Pod "downwardapi-volume-c00a7702-25f2-4fe2-8228-95eeb2ad427d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018641565s
Jul  5 13:36:54.932: INFO: Pod "downwardapi-volume-c00a7702-25f2-4fe2-8228-95eeb2ad427d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022431203s
STEP: Saw pod success
Jul  5 13:36:54.932: INFO: Pod "downwardapi-volume-c00a7702-25f2-4fe2-8228-95eeb2ad427d" satisfied condition "success or failure"
Jul  5 13:36:54.934: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c00a7702-25f2-4fe2-8228-95eeb2ad427d container client-container: 
STEP: delete the pod
Jul  5 13:36:54.969: INFO: Waiting for pod downwardapi-volume-c00a7702-25f2-4fe2-8228-95eeb2ad427d to disappear
Jul  5 13:36:54.974: INFO: Pod downwardapi-volume-c00a7702-25f2-4fe2-8228-95eeb2ad427d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:36:54.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6464" for this suite.
Jul  5 13:37:00.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:37:01.070: INFO: namespace downward-api-6464 deletion completed in 6.092783666s

• [SLOW TEST:10.232 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
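The downward API volume exposes the container's own memory limit through a resourceFieldRef, so the file contents equal the limit in bytes. A sketch with illustrative names:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-limits-demo       # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
      resources:
        limits:
          memory: 64Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: mem_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.memory  # rendered as bytes, e.g. 67108864 for 64Mi
  EOF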
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:37:01.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-ecadb42a-b2c0-4ac2-b4ff-750cfee0f6c6
STEP: Creating a pod to test consume secrets
Jul  5 13:37:01.164: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d8c93380-5179-49d3-b068-671e85aeed83" in namespace "projected-2129" to be "success or failure"
Jul  5 13:37:01.167: INFO: Pod "pod-projected-secrets-d8c93380-5179-49d3-b068-671e85aeed83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.809243ms
Jul  5 13:37:03.188: INFO: Pod "pod-projected-secrets-d8c93380-5179-49d3-b068-671e85aeed83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023567888s
Jul  5 13:37:05.192: INFO: Pod "pod-projected-secrets-d8c93380-5179-49d3-b068-671e85aeed83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028321699s
STEP: Saw pod success
Jul  5 13:37:05.192: INFO: Pod "pod-projected-secrets-d8c93380-5179-49d3-b068-671e85aeed83" satisfied condition "success or failure"
Jul  5 13:37:05.195: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-d8c93380-5179-49d3-b068-671e85aeed83 container projected-secret-volume-test: 
STEP: delete the pod
Jul  5 13:37:05.280: INFO: Waiting for pod pod-projected-secrets-d8c93380-5179-49d3-b068-671e85aeed83 to disappear
Jul  5 13:37:05.285: INFO: Pod pod-projected-secrets-d8c93380-5179-49d3-b068-671e85aeed83 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:37:05.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2129" for this suite.
Jul  5 13:37:11.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:37:11.379: INFO: namespace projected-2129 deletion completed in 6.089812438s

• [SLOW TEST:10.308 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:37:11.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jul  5 13:37:11.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1037'
Jul  5 13:37:14.212: INFO: stderr: ""
Jul  5 13:37:14.212: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jul  5 13:37:15.278: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:37:15.278: INFO: Found 0 / 1
Jul  5 13:37:16.374: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:37:16.374: INFO: Found 0 / 1
Jul  5 13:37:17.216: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:37:17.216: INFO: Found 0 / 1
Jul  5 13:37:18.217: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:37:18.217: INFO: Found 0 / 1
Jul  5 13:37:19.217: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:37:19.217: INFO: Found 1 / 1
Jul  5 13:37:19.217: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul  5 13:37:19.220: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:37:19.221: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jul  5 13:37:19.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-db6bw redis-master --namespace=kubectl-1037'
Jul  5 13:37:19.335: INFO: stderr: ""
Jul  5 13:37:19.335: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Jul 13:37:17.372 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Jul 13:37:17.372 # Server started, Redis version 3.2.12\n1:M 05 Jul 13:37:17.372 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Jul 13:37:17.372 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jul  5 13:37:19.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-db6bw redis-master --namespace=kubectl-1037 --tail=1'
Jul  5 13:37:19.442: INFO: stderr: ""
Jul  5 13:37:19.442: INFO: stdout: "1:M 05 Jul 13:37:17.372 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jul  5 13:37:19.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-db6bw redis-master --namespace=kubectl-1037 --limit-bytes=1'
Jul  5 13:37:19.549: INFO: stderr: ""
Jul  5 13:37:19.549: INFO: stdout: " "
STEP: exposing timestamps
Jul  5 13:37:19.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-db6bw redis-master --namespace=kubectl-1037 --tail=1 --timestamps'
Jul  5 13:37:19.644: INFO: stderr: ""
Jul  5 13:37:19.644: INFO: stdout: "2020-07-05T13:37:17.372970414Z 1:M 05 Jul 13:37:17.372 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jul  5 13:37:22.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-db6bw redis-master --namespace=kubectl-1037 --since=1s'
Jul  5 13:37:22.246: INFO: stderr: ""
Jul  5 13:37:22.246: INFO: stdout: ""
Jul  5 13:37:22.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-db6bw redis-master --namespace=kubectl-1037 --since=24h'
Jul  5 13:37:22.351: INFO: stderr: ""
Jul  5 13:37:22.351: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 05 Jul 13:37:17.372 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 05 Jul 13:37:17.372 # Server started, Redis version 3.2.12\n1:M 05 Jul 13:37:17.372 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 05 Jul 13:37:17.372 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jul  5 13:37:22.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1037'
Jul  5 13:37:22.463: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  5 13:37:22.463: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jul  5 13:37:22.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1037'
Jul  5 13:37:22.566: INFO: stderr: "No resources found.\n"
Jul  5 13:37:22.566: INFO: stdout: ""
Jul  5 13:37:22.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1037 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  5 13:37:22.651: INFO: stderr: ""
Jul  5 13:37:22.651: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:37:22.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1037" for this suite.
Jul  5 13:37:44.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:37:44.787: INFO: namespace kubectl-1037 deletion completed in 22.133443004s

• [SLOW TEST:33.407 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
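
For reference, the filtering flags exercised by this spec are plain kubectl options and can be reproduced against any running pod; a minimal sketch, with pod, container, and namespace names as placeholders:

kubectl logs <pod> <container> --namespace=<ns>                        # full container log
kubectl logs <pod> <container> --namespace=<ns> --tail=1               # only the last line
kubectl logs <pod> <container> --namespace=<ns> --limit-bytes=1        # only the first byte
kubectl logs <pod> <container> --namespace=<ns> --tail=1 --timestamps  # prefix each line with an RFC3339 timestamp
kubectl logs <pod> <container> --namespace=<ns> --since=1s             # only entries newer than 1s
kubectl logs <pod> <container> --namespace=<ns> --since=24h            # only entries newer than 24h
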
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:37:44.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-3776e9da-ef2b-4177-98a0-df9920a18069
STEP: Creating a pod to test consume configMaps
Jul  5 13:37:44.858: INFO: Waiting up to 5m0s for pod "pod-configmaps-05649249-dd43-411a-8f25-30a784496ef4" in namespace "configmap-6809" to be "success or failure"
Jul  5 13:37:44.862: INFO: Pod "pod-configmaps-05649249-dd43-411a-8f25-30a784496ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.835222ms
Jul  5 13:37:46.874: INFO: Pod "pod-configmaps-05649249-dd43-411a-8f25-30a784496ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016191202s
Jul  5 13:37:48.878: INFO: Pod "pod-configmaps-05649249-dd43-411a-8f25-30a784496ef4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020521821s
STEP: Saw pod success
Jul  5 13:37:48.878: INFO: Pod "pod-configmaps-05649249-dd43-411a-8f25-30a784496ef4" satisfied condition "success or failure"
Jul  5 13:37:48.882: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-05649249-dd43-411a-8f25-30a784496ef4 container configmap-volume-test: 
STEP: delete the pod
Jul  5 13:37:48.899: INFO: Waiting for pod pod-configmaps-05649249-dd43-411a-8f25-30a784496ef4 to disappear
Jul  5 13:37:48.939: INFO: Pod pod-configmaps-05649249-dd43-411a-8f25-30a784496ef4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:37:48.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6809" for this suite.
Jul  5 13:37:54.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:37:55.043: INFO: namespace configmap-6809 deletion completed in 6.098400588s

• [SLOW TEST:10.256 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
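
The defaultMode verified above is set on the configMap volume source and applies to every projected file; a minimal sketch that can be run by hand (the configMap name, pod name, busybox image, and 0400 mode are illustrative, not what the suite generates):

kubectl create configmap cm-mode-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-cm-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: cm-mode-demo
      defaultMode: 0400   # every projected file gets mode 0400
EOF
kubectl logs pod-cm-mode-demo   # prints value-1 once the pod has Succeeded
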
SSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:37:55.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul  5 13:38:05.259: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  5 13:38:05.267: INFO: Pod pod-with-prestop-http-hook still exists
Jul  5 13:38:07.267: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  5 13:38:07.271: INFO: Pod pod-with-prestop-http-hook still exists
Jul  5 13:38:09.267: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jul  5 13:38:09.273: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:38:09.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2638" for this suite.
Jul  5 13:38:33.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:38:33.364: INFO: namespace container-lifecycle-hook-2638 deletion completed in 24.082372804s

• [SLOW TEST:38.321 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
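
The hook under test is an HTTP preStop handler: on pod deletion the kubelet issues the GET before sending SIGTERM to the container. A minimal sketch, assuming a separate handler pod is reachable at a known address (the image, path, port, and IP are placeholders):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      preStop:
        httpGet:
          host: 10.244.1.10   # placeholder: IP of the pod that records the hook
          port: 8080
          path: /echo?msg=prestop
EOF
kubectl delete pod pod-with-prestop-http-hook   # kubelet fires the GET during termination
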
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:38:33.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 13:38:33.500: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2150ed9f-ccd9-4afa-a0d7-a2b8a5eadde7" in namespace "downward-api-7805" to be "success or failure"
Jul  5 13:38:33.503: INFO: Pod "downwardapi-volume-2150ed9f-ccd9-4afa-a0d7-a2b8a5eadde7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.190465ms
Jul  5 13:38:35.507: INFO: Pod "downwardapi-volume-2150ed9f-ccd9-4afa-a0d7-a2b8a5eadde7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007065572s
Jul  5 13:38:37.511: INFO: Pod "downwardapi-volume-2150ed9f-ccd9-4afa-a0d7-a2b8a5eadde7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011672925s
STEP: Saw pod success
Jul  5 13:38:37.511: INFO: Pod "downwardapi-volume-2150ed9f-ccd9-4afa-a0d7-a2b8a5eadde7" satisfied condition "success or failure"
Jul  5 13:38:37.515: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2150ed9f-ccd9-4afa-a0d7-a2b8a5eadde7 container client-container: 
STEP: delete the pod
Jul  5 13:38:37.534: INFO: Waiting for pod downwardapi-volume-2150ed9f-ccd9-4afa-a0d7-a2b8a5eadde7 to disappear
Jul  5 13:38:37.565: INFO: Pod downwardapi-volume-2150ed9f-ccd9-4afa-a0d7-a2b8a5eadde7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:38:37.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7805" for this suite.
Jul  5 13:38:43.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:38:43.659: INFO: namespace downward-api-7805 deletion completed in 6.090512095s

• [SLOW TEST:10.295 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
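
The value the pod reads comes from a downwardAPI volume item with a resourceFieldRef; with the default divisor of "1" the file holds the memory request in bytes. A minimal sketch (image, names, and paths are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
EOF
kubectl logs downwardapi-volume-demo   # 33554432 (32Mi in bytes)
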
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:38:43.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  5 13:38:43.730: INFO: Waiting up to 5m0s for pod "pod-1a24306c-7865-4e07-b593-c3e90ab9637a" in namespace "emptydir-279" to be "success or failure"
Jul  5 13:38:43.749: INFO: Pod "pod-1a24306c-7865-4e07-b593-c3e90ab9637a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.108243ms
Jul  5 13:38:45.753: INFO: Pod "pod-1a24306c-7865-4e07-b593-c3e90ab9637a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022655323s
Jul  5 13:38:47.757: INFO: Pod "pod-1a24306c-7865-4e07-b593-c3e90ab9637a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026821709s
STEP: Saw pod success
Jul  5 13:38:47.757: INFO: Pod "pod-1a24306c-7865-4e07-b593-c3e90ab9637a" satisfied condition "success or failure"
Jul  5 13:38:47.760: INFO: Trying to get logs from node iruya-worker pod pod-1a24306c-7865-4e07-b593-c3e90ab9637a container test-container: 
STEP: delete the pod
Jul  5 13:38:47.841: INFO: Waiting for pod pod-1a24306c-7865-4e07-b593-c3e90ab9637a to disappear
Jul  5 13:38:47.850: INFO: Pod pod-1a24306c-7865-4e07-b593-c3e90ab9637a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:38:47.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-279" for this suite.
Jul  5 13:38:53.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:38:54.005: INFO: namespace emptydir-279 deletion completed in 6.151847441s

• [SLOW TEST:10.346 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
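
The "(non-root,0777,tmpfs)" variant mounts a memory-backed emptyDir and has a non-root user create a 0777 file in it; the suite drives this through its mounttest image, but the shape of the spec can be sketched with busybox (user ID, names, and paths are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/ed/f && chmod 0777 /mnt/ed/f && ls -l /mnt/ed/f"]
    volumeMounts:
    - name: ed
      mountPath: /mnt/ed
  volumes:
  - name: ed
    emptyDir:
      medium: Memory           # tmpfs; omit medium for the node-default backing
EOF
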
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:38:54.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  5 13:38:54.097: INFO: Waiting up to 5m0s for pod "pod-fc9f412e-43d4-405c-adca-c6cac270daef" in namespace "emptydir-4895" to be "success or failure"
Jul  5 13:38:54.100: INFO: Pod "pod-fc9f412e-43d4-405c-adca-c6cac270daef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.08601ms
Jul  5 13:38:56.104: INFO: Pod "pod-fc9f412e-43d4-405c-adca-c6cac270daef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007411871s
Jul  5 13:38:58.124: INFO: Pod "pod-fc9f412e-43d4-405c-adca-c6cac270daef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027222326s
STEP: Saw pod success
Jul  5 13:38:58.124: INFO: Pod "pod-fc9f412e-43d4-405c-adca-c6cac270daef" satisfied condition "success or failure"
Jul  5 13:38:58.126: INFO: Trying to get logs from node iruya-worker2 pod pod-fc9f412e-43d4-405c-adca-c6cac270daef container test-container: 
STEP: delete the pod
Jul  5 13:38:58.184: INFO: Waiting for pod pod-fc9f412e-43d4-405c-adca-c6cac270daef to disappear
Jul  5 13:38:58.190: INFO: Pod pod-fc9f412e-43d4-405c-adca-c6cac270daef no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:38:58.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4895" for this suite.
Jul  5 13:39:04.206: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:39:04.276: INFO: namespace emptydir-4895 deletion completed in 6.082098667s

• [SLOW TEST:10.271 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:39:04.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-de2ffbea-71f2-4f9b-aa58-ba680d06b6dc in namespace container-probe-4299
Jul  5 13:39:08.416: INFO: Started pod busybox-de2ffbea-71f2-4f9b-aa58-ba680d06b6dc in namespace container-probe-4299
STEP: checking the pod's current state and verifying that restartCount is present
Jul  5 13:39:08.419: INFO: Initial restart count of pod busybox-de2ffbea-71f2-4f9b-aa58-ba680d06b6dc is 0
Jul  5 13:39:56.522: INFO: Restart count of pod container-probe-4299/busybox-de2ffbea-71f2-4f9b-aa58-ba680d06b6dc is now 1 (48.103504671s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:39:56.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4299" for this suite.
Jul  5 13:40:02.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:40:02.745: INFO: namespace container-probe-4299 deletion completed in 6.13329879s

• [SLOW TEST:58.469 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
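
The probed pod follows the classic pattern: the container creates /tmp/health, removes it after a delay, and the exec probe's non-zero exit then triggers a restart. A minimal sketch along those lines (timings and names are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-exec-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
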
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:40:02.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jul  5 13:40:02.802: INFO: Waiting up to 5m0s for pod "pod-fedec475-1101-4277-9da5-1dc0028cb0f7" in namespace "emptydir-9133" to be "success or failure"
Jul  5 13:40:02.814: INFO: Pod "pod-fedec475-1101-4277-9da5-1dc0028cb0f7": Phase="Pending", Reason="", readiness=false. Elapsed: 12.778616ms
Jul  5 13:40:04.819: INFO: Pod "pod-fedec475-1101-4277-9da5-1dc0028cb0f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016956133s
Jul  5 13:40:06.823: INFO: Pod "pod-fedec475-1101-4277-9da5-1dc0028cb0f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021066186s
STEP: Saw pod success
Jul  5 13:40:06.823: INFO: Pod "pod-fedec475-1101-4277-9da5-1dc0028cb0f7" satisfied condition "success or failure"
Jul  5 13:40:06.826: INFO: Trying to get logs from node iruya-worker2 pod pod-fedec475-1101-4277-9da5-1dc0028cb0f7 container test-container: 
STEP: delete the pod
Jul  5 13:40:06.840: INFO: Waiting for pod pod-fedec475-1101-4277-9da5-1dc0028cb0f7 to disappear
Jul  5 13:40:06.859: INFO: Pod pod-fedec475-1101-4277-9da5-1dc0028cb0f7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:40:06.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9133" for this suite.
Jul  5 13:40:12.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:40:12.968: INFO: namespace emptydir-9133 deletion completed in 6.105234087s

• [SLOW TEST:10.222 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:40:12.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-46e88e77-5719-4ebd-9245-c1394e5ff289
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:40:17.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7344" for this suite.
Jul  5 13:40:39.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:40:39.247: INFO: namespace configmap-7344 deletion completed in 22.121468044s

• [SLOW TEST:26.278 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
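
Binary keys live under a ConfigMap's binaryData field (base64-encoded) rather than data, and kubectl create configmap sorts keys into the right field automatically. A quick sketch (file and object names are illustrative):

head -c 16 /dev/urandom > blob.bin
kubectl create configmap binary-demo --from-file=blob.bin --from-literal=text=hello
kubectl get configmap binary-demo -o yaml   # text under data:, blob.bin under binaryData:
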
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:40:39.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jul  5 13:40:43.414: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jul  5 13:40:58.529: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:40:58.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5046" for this suite.
Jul  5 13:41:04.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:41:04.667: INFO: namespace pods-5046 deletion completed in 6.11243402s

• [SLOW TEST:25.420 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
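
The grace period negotiated above is the window between SIGTERM and SIGKILL; it defaults to the pod's terminationGracePeriodSeconds (30s unless overridden) and can be changed per delete, for example:

kubectl delete pod <pod> --grace-period=5          # give the container 5s to exit cleanly
kubectl delete pod <pod> --grace-period=0 --force  # do not wait for kubelet confirmation
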
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:41:04.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:41:30.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2380" for this suite.
Jul  5 13:41:37.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:41:37.078: INFO: namespace namespaces-2380 deletion completed in 6.085434608s
STEP: Destroying namespace "nsdeletetest-6143" for this suite.
Jul  5 13:41:37.081: INFO: Namespace nsdeletetest-6143 was already deleted
STEP: Destroying namespace "nsdeletetest-4983" for this suite.
Jul  5 13:41:43.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:41:43.169: INFO: namespace nsdeletetest-4983 deletion completed in 6.087777938s

• [SLOW TEST:38.501 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
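
The invariant under test, that namespace deletion garbage-collects every pod in it, is easy to reproduce by hand (names are placeholders):

kubectl create namespace nsdelete-demo
kubectl run sleeper --image=busybox --restart=Never --namespace=nsdelete-demo -- sleep 3600
kubectl delete namespace nsdelete-demo
kubectl get pods --namespace=nsdelete-demo   # "No resources found" once deletion completes
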
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:41:43.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-26de72d0-f03d-4dd8-8eea-fcc33868c68e
STEP: Creating a pod to test consume secrets
Jul  5 13:41:43.248: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-359a5ea4-d91e-459b-9ed8-cea75a112f75" in namespace "projected-6045" to be "success or failure"
Jul  5 13:41:43.297: INFO: Pod "pod-projected-secrets-359a5ea4-d91e-459b-9ed8-cea75a112f75": Phase="Pending", Reason="", readiness=false. Elapsed: 49.561864ms
Jul  5 13:41:45.328: INFO: Pod "pod-projected-secrets-359a5ea4-d91e-459b-9ed8-cea75a112f75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079647392s
Jul  5 13:41:47.424: INFO: Pod "pod-projected-secrets-359a5ea4-d91e-459b-9ed8-cea75a112f75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.175728583s
STEP: Saw pod success
Jul  5 13:41:47.424: INFO: Pod "pod-projected-secrets-359a5ea4-d91e-459b-9ed8-cea75a112f75" satisfied condition "success or failure"
Jul  5 13:41:47.426: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-359a5ea4-d91e-459b-9ed8-cea75a112f75 container projected-secret-volume-test: 
STEP: delete the pod
Jul  5 13:41:47.503: INFO: Waiting for pod pod-projected-secrets-359a5ea4-d91e-459b-9ed8-cea75a112f75 to disappear
Jul  5 13:41:47.561: INFO: Pod pod-projected-secrets-359a5ea4-d91e-459b-9ed8-cea75a112f75 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:41:47.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6045" for this suite.
Jul  5 13:41:53.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:41:53.669: INFO: namespace projected-6045 deletion completed in 6.104370438s

• [SLOW TEST:10.500 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:41:53.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 13:42:15.816: INFO: Container started at 2020-07-05 13:41:56 +0000 UTC, pod became ready at 2020-07-05 13:42:15 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:42:15.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-912" for this suite.
Jul  5 13:42:37.850: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:42:37.927: INFO: namespace container-probe-912 deletion completed in 22.107528039s

• [SLOW TEST:44.257 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
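
What the spec asserts is that readiness gates on initialDelaySeconds: the container can be started well before the pod is marked Ready, as the two timestamps above show. A sketch of a pod with a deliberately long initial delay (image and timings are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-delay-demo
spec:
  containers:
  - name: web
    image: nginx
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20
      periodSeconds: 5
EOF
kubectl get pod readiness-delay-demo -w   # READY stays 0/1 until the delay elapses and the probe passes
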
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:42:37.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 13:42:38.015: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c11c9bd2-94d2-4e8b-90ed-76c842075052" in namespace "projected-5228" to be "success or failure"
Jul  5 13:42:38.031: INFO: Pod "downwardapi-volume-c11c9bd2-94d2-4e8b-90ed-76c842075052": Phase="Pending", Reason="", readiness=false. Elapsed: 15.89702ms
Jul  5 13:42:40.036: INFO: Pod "downwardapi-volume-c11c9bd2-94d2-4e8b-90ed-76c842075052": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020513143s
Jul  5 13:42:42.041: INFO: Pod "downwardapi-volume-c11c9bd2-94d2-4e8b-90ed-76c842075052": Phase="Running", Reason="", readiness=true. Elapsed: 4.026070427s
Jul  5 13:42:44.047: INFO: Pod "downwardapi-volume-c11c9bd2-94d2-4e8b-90ed-76c842075052": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031309228s
STEP: Saw pod success
Jul  5 13:42:44.047: INFO: Pod "downwardapi-volume-c11c9bd2-94d2-4e8b-90ed-76c842075052" satisfied condition "success or failure"
Jul  5 13:42:44.050: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c11c9bd2-94d2-4e8b-90ed-76c842075052 container client-container: 
STEP: delete the pod
Jul  5 13:42:44.069: INFO: Waiting for pod downwardapi-volume-c11c9bd2-94d2-4e8b-90ed-76c842075052 to disappear
Jul  5 13:42:44.073: INFO: Pod downwardapi-volume-c11c9bd2-94d2-4e8b-90ed-76c842075052 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:42:44.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5228" for this suite.
Jul  5 13:42:52.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:42:52.169: INFO: namespace projected-5228 deletion completed in 8.09354973s

• [SLOW TEST:14.242 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
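
The interesting wrinkle here is the fallback: when a container declares no memory limit, a downwardAPI item for limits.memory resolves to the node's allocatable memory instead of failing. Only the volume item differs from the requests.memory sketch shown earlier; a fragment of that spec:

downwardAPI:
  items:
  - path: mem_limit
    resourceFieldRef:
      containerName: client-container
      resource: limits.memory   # no limit set => file holds node allocatable memory, in bytes
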
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:42:52.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-5739a4ad-0912-448e-972b-14330690962e
STEP: Creating a pod to test consume configMaps
Jul  5 13:42:52.237: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3822001c-7364-4ff6-bda6-bbea6dbe3f1f" in namespace "projected-4422" to be "success or failure"
Jul  5 13:42:52.292: INFO: Pod "pod-projected-configmaps-3822001c-7364-4ff6-bda6-bbea6dbe3f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 55.20341ms
Jul  5 13:42:54.296: INFO: Pod "pod-projected-configmaps-3822001c-7364-4ff6-bda6-bbea6dbe3f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05878698s
Jul  5 13:42:56.300: INFO: Pod "pod-projected-configmaps-3822001c-7364-4ff6-bda6-bbea6dbe3f1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063292459s
STEP: Saw pod success
Jul  5 13:42:56.300: INFO: Pod "pod-projected-configmaps-3822001c-7364-4ff6-bda6-bbea6dbe3f1f" satisfied condition "success or failure"
Jul  5 13:42:56.304: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-3822001c-7364-4ff6-bda6-bbea6dbe3f1f container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 13:42:56.455: INFO: Waiting for pod pod-projected-configmaps-3822001c-7364-4ff6-bda6-bbea6dbe3f1f to disappear
Jul  5 13:42:56.457: INFO: Pod pod-projected-configmaps-3822001c-7364-4ff6-bda6-bbea6dbe3f1f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:42:56.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4422" for this suite.
Jul  5 13:43:02.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:43:02.547: INFO: namespace projected-4422 deletion completed in 6.086280869s

• [SLOW TEST:10.377 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
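
The "mappings and Item mode" variant remaps a key to a new relative path and gives that one file its own mode, overriding any defaultMode; a minimal sketch (names and the 0400 mode are illustrative):

kubectl create configmap proj-cm-demo --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected/remapped/data-1"]
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
  volumes:
  - name: proj
    projected:
      sources:
      - configMap:
          name: proj-cm-demo
          items:
          - key: data-1
            path: remapped/data-1
            mode: 0400          # per-item mode overrides defaultMode
EOF
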
SSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:43:02.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-7093
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7093
STEP: Deleting pre-stop pod
Jul  5 13:43:15.675: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:43:15.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7093" for this suite.
Jul  5 13:43:57.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:43:57.814: INFO: namespace prestop-7093 deletion completed in 42.107637836s

• [SLOW TEST:55.267 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:43:57.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jul  5 13:43:57.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-823'
Jul  5 13:43:58.144: INFO: stderr: ""
Jul  5 13:43:58.144: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  5 13:43:58.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-823'
Jul  5 13:43:58.252: INFO: stderr: ""
Jul  5 13:43:58.252: INFO: stdout: "update-demo-nautilus-mljj9 update-demo-nautilus-sctmm "
Jul  5 13:43:58.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mljj9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-823'
Jul  5 13:43:58.383: INFO: stderr: ""
Jul  5 13:43:58.383: INFO: stdout: ""
Jul  5 13:43:58.383: INFO: update-demo-nautilus-mljj9 is created but not running
Jul  5 13:44:03.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-823'
Jul  5 13:44:03.479: INFO: stderr: ""
Jul  5 13:44:03.479: INFO: stdout: "update-demo-nautilus-mljj9 update-demo-nautilus-sctmm "
Jul  5 13:44:03.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mljj9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-823'
Jul  5 13:44:03.586: INFO: stderr: ""
Jul  5 13:44:03.586: INFO: stdout: ""
Jul  5 13:44:03.586: INFO: update-demo-nautilus-mljj9 is created but not running
Jul  5 13:44:08.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-823'
Jul  5 13:44:08.682: INFO: stderr: ""
Jul  5 13:44:08.682: INFO: stdout: "update-demo-nautilus-mljj9 update-demo-nautilus-sctmm "
Jul  5 13:44:08.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mljj9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-823'
Jul  5 13:44:08.779: INFO: stderr: ""
Jul  5 13:44:08.779: INFO: stdout: "true"
Jul  5 13:44:08.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mljj9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-823'
Jul  5 13:44:08.876: INFO: stderr: ""
Jul  5 13:44:08.876: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 13:44:08.876: INFO: validating pod update-demo-nautilus-mljj9
Jul  5 13:44:08.880: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 13:44:08.880: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  5 13:44:08.880: INFO: update-demo-nautilus-mljj9 is verified up and running
Jul  5 13:44:08.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sctmm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-823'
Jul  5 13:44:08.986: INFO: stderr: ""
Jul  5 13:44:08.986: INFO: stdout: "true"
Jul  5 13:44:08.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-sctmm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-823'
Jul  5 13:44:09.091: INFO: stderr: ""
Jul  5 13:44:09.091: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 13:44:09.091: INFO: validating pod update-demo-nautilus-sctmm
Jul  5 13:44:09.095: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 13:44:09.095: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jul  5 13:44:09.096: INFO: update-demo-nautilus-sctmm is verified up and running
STEP: using delete to clean up resources
Jul  5 13:44:09.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-823'
Jul  5 13:44:09.191: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  5 13:44:09.191: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul  5 13:44:09.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-823'
Jul  5 13:44:09.309: INFO: stderr: "No resources found.\n"
Jul  5 13:44:09.309: INFO: stdout: ""
Jul  5 13:44:09.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-823 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  5 13:44:09.471: INFO: stderr: ""
Jul  5 13:44:09.471: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:44:09.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-823" for this suite.
Jul  5 13:44:15.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:44:16.048: INFO: namespace kubectl-823 deletion completed in 6.502354165s

• [SLOW TEST:18.233 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
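
The cleanup verification above leans on kubectl's go-template output to list only pods that are not already terminating; the same query works for any label selector:

kubectl get pods -l name=update-demo -o go-template='{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'
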
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:44:16.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jul  5 13:44:16.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jul  5 13:44:16.341: INFO: stderr: ""
Jul  5 13:44:16.341: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:44:16.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9946" for this suite.
Jul  5 13:44:22.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:44:22.441: INFO: namespace kubectl-9946 deletion completed in 6.094671773s

• [SLOW TEST:6.392 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
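
The assertion reduces to a one-liner; grep -x demands a whole-line match, so entries such as "policy/v1beta1" do not count:

kubectl api-versions | grep -x v1 && echo core v1 group is served
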
SSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:44:22.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jul  5 13:44:22.471: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jul  5 13:44:23.171: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jul  5 13:44:25.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729553463, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729553463, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729553463, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729553463, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 13:44:27.620: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729553463, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729553463, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729553463, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729553463, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 13:44:30.257: INFO: Waited 628.228767ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:44:30.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1886" for this suite.
Jul  5 13:44:36.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:44:37.075: INFO: namespace aggregator-1886 deletion completed in 6.259091684s

• [SLOW TEST:14.634 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
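
"Registering the sample API server", as stepped through above, means running it behind an in-cluster Service and creating an APIService object so the aggregator proxies the new group/version to it; the DeploymentStatus dumps above are simply the test waiting for the backing deployment to report Available. A hedged sketch of the APIService object alone; the group, namespace, service name, and priority values are illustrative, and the real fixture may pin a CABundle rather than skipping TLS verification:

  package main

  import (
      "fmt"

      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
  )

  func main() {
      // An APIService is named "<version>.<group>"; the aggregator proxies
      // requests for that group/version to the referenced in-cluster Service.
      apiService := &apiregv1.APIService{
          ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"},
          Spec: apiregv1.APIServiceSpec{
              Group:   "wardle.k8s.io",
              Version: "v1alpha1",
              Service: &apiregv1.ServiceReference{
                  Namespace: "aggregator-1886",
                  Name:      "sample-api",
              },
              // Skipping TLS verification keeps the sketch short; a CABundle
              // would be the production-grade choice.
              InsecureSkipTLSVerify: true,
              GroupPriorityMinimum:  2000,
              VersionPriority:       200,
          },
      }
      fmt.Printf("%s -> %s/%s\n", apiService.Name, apiService.Spec.Service.Namespace, apiService.Spec.Service.Name)
  }
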
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:44:37.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-fb3ee58a-84c7-4e19-b2c1-c0c0de16a446 in namespace container-probe-1397
Jul  5 13:44:41.189: INFO: Started pod liveness-fb3ee58a-84c7-4e19-b2c1-c0c0de16a446 in namespace container-probe-1397
STEP: checking the pod's current state and verifying that restartCount is present
Jul  5 13:44:41.192: INFO: Initial restart count of pod liveness-fb3ee58a-84c7-4e19-b2c1-c0c0de16a446 is 0
Jul  5 13:44:55.223: INFO: Restart count of pod container-probe-1397/liveness-fb3ee58a-84c7-4e19-b2c1-c0c0de16a446 is now 1 (14.030815911s elapsed)
Jul  5 13:45:15.265: INFO: Restart count of pod container-probe-1397/liveness-fb3ee58a-84c7-4e19-b2c1-c0c0de16a446 is now 2 (34.072955633s elapsed)
Jul  5 13:45:35.311: INFO: Restart count of pod container-probe-1397/liveness-fb3ee58a-84c7-4e19-b2c1-c0c0de16a446 is now 3 (54.119311985s elapsed)
Jul  5 13:45:55.353: INFO: Restart count of pod container-probe-1397/liveness-fb3ee58a-84c7-4e19-b2c1-c0c0de16a446 is now 4 (1m14.161083493s elapsed)
Jul  5 13:46:57.536: INFO: Restart count of pod container-probe-1397/liveness-fb3ee58a-84c7-4e19-b2c1-c0c0de16a446 is now 5 (2m16.343974808s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:46:57.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1397" for this suite.
Jul  5 13:47:03.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:47:03.760: INFO: namespace container-probe-1397 deletion completed in 6.202668624s

• [SLOW TEST:146.685 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
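
The restart counts above climb 1 through 5, with the gap stretching from about 20s to over a minute at the end as the kubelet's exponential crash-loop backoff kicks in, on a pod whose liveness probe can never pass. A minimal sketch of such a pod, using the pre-1.24 corev1.Handler field name that matches this run's API version; the image, command, and thresholds are illustrative:

  package main

  import (
      "fmt"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func main() {
      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "liveness-always-fails"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyAlways,
              Containers: []corev1.Container{{
                  Name:    "liveness",
                  Image:   "busybox:1.29",
                  Command: []string{"/bin/sh", "-c", "sleep 3600"},
                  LivenessProbe: &corev1.Probe{
                      // /bin/false never succeeds, so every probe period ends in
                      // a failed check and, once FailureThreshold is reached, a
                      // restart; restartCount can only ever increase.
                      Handler: corev1.Handler{
                          Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
                      },
                      InitialDelaySeconds: 5,
                      PeriodSeconds:       5,
                      FailureThreshold:    1,
                  },
              }},
          },
      }
      fmt.Printf("probe: %+v\n", pod.Spec.Containers[0].LivenessProbe)
  }
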
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:47:03.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-0622e092-c970-45e3-802e-705e59dcda81
STEP: Creating a pod to test consume secrets
Jul  5 13:47:03.876: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2c0a3960-b3f6-44aa-ae4a-8242142eef5e" in namespace "projected-8411" to be "success or failure"
Jul  5 13:47:03.900: INFO: Pod "pod-projected-secrets-2c0a3960-b3f6-44aa-ae4a-8242142eef5e": Phase="Pending", Reason="", readiness=false. Elapsed: 24.592899ms
Jul  5 13:47:05.904: INFO: Pod "pod-projected-secrets-2c0a3960-b3f6-44aa-ae4a-8242142eef5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028439251s
Jul  5 13:47:07.908: INFO: Pod "pod-projected-secrets-2c0a3960-b3f6-44aa-ae4a-8242142eef5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031975892s
STEP: Saw pod success
Jul  5 13:47:07.908: INFO: Pod "pod-projected-secrets-2c0a3960-b3f6-44aa-ae4a-8242142eef5e" satisfied condition "success or failure"
Jul  5 13:47:07.910: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-2c0a3960-b3f6-44aa-ae4a-8242142eef5e container projected-secret-volume-test: 
STEP: delete the pod
Jul  5 13:47:07.979: INFO: Waiting for pod pod-projected-secrets-2c0a3960-b3f6-44aa-ae4a-8242142eef5e to disappear
Jul  5 13:47:08.013: INFO: Pod pod-projected-secrets-2c0a3960-b3f6-44aa-ae4a-8242142eef5e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:47:08.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8411" for this suite.
Jul  5 13:47:14.034: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:47:14.116: INFO: namespace projected-8411 deletion completed in 6.098627174s

• [SLOW TEST:10.355 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
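
"Mappings and Item Mode set" in the test above means the projected secret volume remaps a secret key to a new file path and pins that file's permission bits, and the test pod then reads the file back to verify both. A sketch of the pod shape being exercised; the secret name, key, path, mode, and image are illustrative stand-ins for the generated values in the log:

  package main

  import (
      "fmt"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func int32Ptr(i int32) *int32 { return &i }

  func main() {
      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Volumes: []corev1.Volume{{
                  Name: "projected-secret-volume",
                  VolumeSource: corev1.VolumeSource{
                      Projected: &corev1.ProjectedVolumeSource{
                          Sources: []corev1.VolumeProjection{{
                              Secret: &corev1.SecretProjection{
                                  LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-map"},
                                  // The mapping: key "data-1" surfaces as file
                                  // "new-path-data-1" with mode 0400.
                                  Items: []corev1.KeyToPath{{
                                      Key:  "data-1",
                                      Path: "new-path-data-1",
                                      Mode: int32Ptr(0400),
                                  }},
                              },
                          }},
                      },
                  },
              }},
              Containers: []corev1.Container{{
                  Name:    "projected-secret-volume-test",
                  Image:   "busybox:1.29",
                  Command: []string{"/bin/sh", "-c", "stat -c '%a' /etc/projected-secret-volume/new-path-data-1"},
                  VolumeMounts: []corev1.VolumeMount{{
                      Name:      "projected-secret-volume",
                      MountPath: "/etc/projected-secret-volume",
                  }},
              }},
          },
      }
      fmt.Printf("volume: %+v\n", pod.Spec.Volumes[0].VolumeSource.Projected)
  }
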
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:47:14.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jul  5 13:47:24.235: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4301 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:47:24.235: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:47:24.266751       6 log.go:172] (0xc000c03080) (0xc0002f3900) Create stream
I0705 13:47:24.266798       6 log.go:172] (0xc000c03080) (0xc0002f3900) Stream added, broadcasting: 1
I0705 13:47:24.268494       6 log.go:172] (0xc000c03080) Reply frame received for 1
I0705 13:47:24.268551       6 log.go:172] (0xc000c03080) (0xc0002f3c20) Create stream
I0705 13:47:24.268566       6 log.go:172] (0xc000c03080) (0xc0002f3c20) Stream added, broadcasting: 3
I0705 13:47:24.269923       6 log.go:172] (0xc000c03080) Reply frame received for 3
I0705 13:47:24.269969       6 log.go:172] (0xc000c03080) (0xc0005b4140) Create stream
I0705 13:47:24.269983       6 log.go:172] (0xc000c03080) (0xc0005b4140) Stream added, broadcasting: 5
I0705 13:47:24.270813       6 log.go:172] (0xc000c03080) Reply frame received for 5
I0705 13:47:24.338983       6 log.go:172] (0xc000c03080) Data frame received for 3
I0705 13:47:24.339014       6 log.go:172] (0xc0002f3c20) (3) Data frame handling
I0705 13:47:24.339032       6 log.go:172] (0xc0002f3c20) (3) Data frame sent
I0705 13:47:24.339041       6 log.go:172] (0xc000c03080) Data frame received for 3
I0705 13:47:24.339057       6 log.go:172] (0xc0002f3c20) (3) Data frame handling
I0705 13:47:24.339099       6 log.go:172] (0xc000c03080) Data frame received for 5
I0705 13:47:24.339160       6 log.go:172] (0xc0005b4140) (5) Data frame handling
I0705 13:47:24.340669       6 log.go:172] (0xc000c03080) Data frame received for 1
I0705 13:47:24.340692       6 log.go:172] (0xc0002f3900) (1) Data frame handling
I0705 13:47:24.340713       6 log.go:172] (0xc0002f3900) (1) Data frame sent
I0705 13:47:24.340730       6 log.go:172] (0xc000c03080) (0xc0002f3900) Stream removed, broadcasting: 1
I0705 13:47:24.340747       6 log.go:172] (0xc000c03080) Go away received
I0705 13:47:24.340953       6 log.go:172] (0xc000c03080) (0xc0002f3900) Stream removed, broadcasting: 1
I0705 13:47:24.340986       6 log.go:172] (0xc000c03080) (0xc0002f3c20) Stream removed, broadcasting: 3
I0705 13:47:24.341011       6 log.go:172] (0xc000c03080) (0xc0005b4140) Stream removed, broadcasting: 5
Jul  5 13:47:24.341: INFO: Exec stderr: ""
Jul  5 13:47:24.341: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4301 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:47:24.341: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:47:24.368060       6 log.go:172] (0xc001176160) (0xc000f5a320) Create stream
I0705 13:47:24.368090       6 log.go:172] (0xc001176160) (0xc000f5a320) Stream added, broadcasting: 1
I0705 13:47:24.370248       6 log.go:172] (0xc001176160) Reply frame received for 1
I0705 13:47:24.370308       6 log.go:172] (0xc001176160) (0xc0005b4460) Create stream
I0705 13:47:24.370334       6 log.go:172] (0xc001176160) (0xc0005b4460) Stream added, broadcasting: 3
I0705 13:47:24.371293       6 log.go:172] (0xc001176160) Reply frame received for 3
I0705 13:47:24.371335       6 log.go:172] (0xc001176160) (0xc0005b4500) Create stream
I0705 13:47:24.371349       6 log.go:172] (0xc001176160) (0xc0005b4500) Stream added, broadcasting: 5
I0705 13:47:24.372241       6 log.go:172] (0xc001176160) Reply frame received for 5
I0705 13:47:24.429756       6 log.go:172] (0xc001176160) Data frame received for 5
I0705 13:47:24.429803       6 log.go:172] (0xc0005b4500) (5) Data frame handling
I0705 13:47:24.429831       6 log.go:172] (0xc001176160) Data frame received for 3
I0705 13:47:24.429853       6 log.go:172] (0xc0005b4460) (3) Data frame handling
I0705 13:47:24.429879       6 log.go:172] (0xc0005b4460) (3) Data frame sent
I0705 13:47:24.429893       6 log.go:172] (0xc001176160) Data frame received for 3
I0705 13:47:24.429904       6 log.go:172] (0xc0005b4460) (3) Data frame handling
I0705 13:47:24.430765       6 log.go:172] (0xc001176160) Data frame received for 1
I0705 13:47:24.430789       6 log.go:172] (0xc000f5a320) (1) Data frame handling
I0705 13:47:24.430822       6 log.go:172] (0xc000f5a320) (1) Data frame sent
I0705 13:47:24.430866       6 log.go:172] (0xc001176160) (0xc000f5a320) Stream removed, broadcasting: 1
I0705 13:47:24.430910       6 log.go:172] (0xc001176160) Go away received
I0705 13:47:24.430982       6 log.go:172] (0xc001176160) (0xc000f5a320) Stream removed, broadcasting: 1
I0705 13:47:24.431012       6 log.go:172] (0xc001176160) (0xc0005b4460) Stream removed, broadcasting: 3
I0705 13:47:24.431040       6 log.go:172] (0xc001176160) (0xc0005b4500) Stream removed, broadcasting: 5
Jul  5 13:47:24.431: INFO: Exec stderr: ""
Jul  5 13:47:24.431: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4301 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:47:24.431: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:47:24.466416       6 log.go:172] (0xc000539ad0) (0xc000916820) Create stream
I0705 13:47:24.466455       6 log.go:172] (0xc000539ad0) (0xc000916820) Stream added, broadcasting: 1
I0705 13:47:24.468614       6 log.go:172] (0xc000539ad0) Reply frame received for 1
I0705 13:47:24.468639       6 log.go:172] (0xc000539ad0) (0xc0002f3ea0) Create stream
I0705 13:47:24.468647       6 log.go:172] (0xc000539ad0) (0xc0002f3ea0) Stream added, broadcasting: 3
I0705 13:47:24.469504       6 log.go:172] (0xc000539ad0) Reply frame received for 3
I0705 13:47:24.469539       6 log.go:172] (0xc000539ad0) (0xc000fd8320) Create stream
I0705 13:47:24.469551       6 log.go:172] (0xc000539ad0) (0xc000fd8320) Stream added, broadcasting: 5
I0705 13:47:24.470230       6 log.go:172] (0xc000539ad0) Reply frame received for 5
I0705 13:47:24.544482       6 log.go:172] (0xc000539ad0) Data frame received for 5
I0705 13:47:24.544532       6 log.go:172] (0xc000539ad0) Data frame received for 3
I0705 13:47:24.544551       6 log.go:172] (0xc0002f3ea0) (3) Data frame handling
I0705 13:47:24.544562       6 log.go:172] (0xc0002f3ea0) (3) Data frame sent
I0705 13:47:24.544570       6 log.go:172] (0xc000539ad0) Data frame received for 3
I0705 13:47:24.544577       6 log.go:172] (0xc0002f3ea0) (3) Data frame handling
I0705 13:47:24.544596       6 log.go:172] (0xc000fd8320) (5) Data frame handling
I0705 13:47:24.546494       6 log.go:172] (0xc000539ad0) Data frame received for 1
I0705 13:47:24.546516       6 log.go:172] (0xc000916820) (1) Data frame handling
I0705 13:47:24.546525       6 log.go:172] (0xc000916820) (1) Data frame sent
I0705 13:47:24.546534       6 log.go:172] (0xc000539ad0) (0xc000916820) Stream removed, broadcasting: 1
I0705 13:47:24.546591       6 log.go:172] (0xc000539ad0) Go away received
I0705 13:47:24.546696       6 log.go:172] (0xc000539ad0) (0xc000916820) Stream removed, broadcasting: 1
I0705 13:47:24.546741       6 log.go:172] (0xc000539ad0) (0xc0002f3ea0) Stream removed, broadcasting: 3
I0705 13:47:24.546758       6 log.go:172] (0xc000539ad0) (0xc000fd8320) Stream removed, broadcasting: 5
Jul  5 13:47:24.546: INFO: Exec stderr: ""
Jul  5 13:47:24.546: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4301 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:47:24.546: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:47:24.582901       6 log.go:172] (0xc00196ea50) (0xc000916fa0) Create stream
I0705 13:47:24.582941       6 log.go:172] (0xc00196ea50) (0xc000916fa0) Stream added, broadcasting: 1
I0705 13:47:24.586045       6 log.go:172] (0xc00196ea50) Reply frame received for 1
I0705 13:47:24.586095       6 log.go:172] (0xc00196ea50) (0xc000917040) Create stream
I0705 13:47:24.586107       6 log.go:172] (0xc00196ea50) (0xc000917040) Stream added, broadcasting: 3
I0705 13:47:24.587034       6 log.go:172] (0xc00196ea50) Reply frame received for 3
I0705 13:47:24.587084       6 log.go:172] (0xc00196ea50) (0xc0009170e0) Create stream
I0705 13:47:24.587105       6 log.go:172] (0xc00196ea50) (0xc0009170e0) Stream added, broadcasting: 5
I0705 13:47:24.588109       6 log.go:172] (0xc00196ea50) Reply frame received for 5
I0705 13:47:24.658668       6 log.go:172] (0xc00196ea50) Data frame received for 3
I0705 13:47:24.658745       6 log.go:172] (0xc000917040) (3) Data frame handling
I0705 13:47:24.658779       6 log.go:172] (0xc000917040) (3) Data frame sent
I0705 13:47:24.659244       6 log.go:172] (0xc00196ea50) Data frame received for 5
I0705 13:47:24.659258       6 log.go:172] (0xc0009170e0) (5) Data frame handling
I0705 13:47:24.659280       6 log.go:172] (0xc00196ea50) Data frame received for 3
I0705 13:47:24.659287       6 log.go:172] (0xc000917040) (3) Data frame handling
I0705 13:47:24.662539       6 log.go:172] (0xc00196ea50) Data frame received for 1
I0705 13:47:24.662573       6 log.go:172] (0xc000916fa0) (1) Data frame handling
I0705 13:47:24.662597       6 log.go:172] (0xc000916fa0) (1) Data frame sent
I0705 13:47:24.662610       6 log.go:172] (0xc00196ea50) (0xc000916fa0) Stream removed, broadcasting: 1
I0705 13:47:24.662628       6 log.go:172] (0xc00196ea50) Go away received
I0705 13:47:24.662732       6 log.go:172] (0xc00196ea50) (0xc000916fa0) Stream removed, broadcasting: 1
I0705 13:47:24.662757       6 log.go:172] (0xc00196ea50) (0xc000917040) Stream removed, broadcasting: 3
I0705 13:47:24.662769       6 log.go:172] (0xc00196ea50) (0xc0009170e0) Stream removed, broadcasting: 5
Jul  5 13:47:24.662: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jul  5 13:47:24.662: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4301 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:47:24.662: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:47:24.692403       6 log.go:172] (0xc001176a50) (0xc000f5a820) Create stream
I0705 13:47:24.692427       6 log.go:172] (0xc001176a50) (0xc000f5a820) Stream added, broadcasting: 1
I0705 13:47:24.694746       6 log.go:172] (0xc001176a50) Reply frame received for 1
I0705 13:47:24.694789       6 log.go:172] (0xc001176a50) (0xc000f5a8c0) Create stream
I0705 13:47:24.694803       6 log.go:172] (0xc001176a50) (0xc000f5a8c0) Stream added, broadcasting: 3
I0705 13:47:24.695803       6 log.go:172] (0xc001176a50) Reply frame received for 3
I0705 13:47:24.695840       6 log.go:172] (0xc001176a50) (0xc002e78000) Create stream
I0705 13:47:24.695857       6 log.go:172] (0xc001176a50) (0xc002e78000) Stream added, broadcasting: 5
I0705 13:47:24.696665       6 log.go:172] (0xc001176a50) Reply frame received for 5
I0705 13:47:24.757089       6 log.go:172] (0xc001176a50) Data frame received for 5
I0705 13:47:24.757284       6 log.go:172] (0xc002e78000) (5) Data frame handling
I0705 13:47:24.757320       6 log.go:172] (0xc001176a50) Data frame received for 3
I0705 13:47:24.757328       6 log.go:172] (0xc000f5a8c0) (3) Data frame handling
I0705 13:47:24.757345       6 log.go:172] (0xc000f5a8c0) (3) Data frame sent
I0705 13:47:24.757361       6 log.go:172] (0xc001176a50) Data frame received for 3
I0705 13:47:24.757368       6 log.go:172] (0xc000f5a8c0) (3) Data frame handling
I0705 13:47:24.758602       6 log.go:172] (0xc001176a50) Data frame received for 1
I0705 13:47:24.758623       6 log.go:172] (0xc000f5a820) (1) Data frame handling
I0705 13:47:24.758637       6 log.go:172] (0xc000f5a820) (1) Data frame sent
I0705 13:47:24.758713       6 log.go:172] (0xc001176a50) (0xc000f5a820) Stream removed, broadcasting: 1
I0705 13:47:24.758794       6 log.go:172] (0xc001176a50) Go away received
I0705 13:47:24.758896       6 log.go:172] (0xc001176a50) (0xc000f5a820) Stream removed, broadcasting: 1
I0705 13:47:24.758928       6 log.go:172] (0xc001176a50) (0xc000f5a8c0) Stream removed, broadcasting: 3
I0705 13:47:24.758945       6 log.go:172] (0xc001176a50) (0xc002e78000) Stream removed, broadcasting: 5
Jul  5 13:47:24.758: INFO: Exec stderr: ""
Jul  5 13:47:24.758: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4301 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:47:24.759: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:47:24.792311       6 log.go:172] (0xc00196fc30) (0xc000917900) Create stream
I0705 13:47:24.792341       6 log.go:172] (0xc00196fc30) (0xc000917900) Stream added, broadcasting: 1
I0705 13:47:24.795320       6 log.go:172] (0xc00196fc30) Reply frame received for 1
I0705 13:47:24.795361       6 log.go:172] (0xc00196fc30) (0xc0009179a0) Create stream
I0705 13:47:24.795376       6 log.go:172] (0xc00196fc30) (0xc0009179a0) Stream added, broadcasting: 3
I0705 13:47:24.796376       6 log.go:172] (0xc00196fc30) Reply frame received for 3
I0705 13:47:24.796417       6 log.go:172] (0xc00196fc30) (0xc002e780a0) Create stream
I0705 13:47:24.796432       6 log.go:172] (0xc00196fc30) (0xc002e780a0) Stream added, broadcasting: 5
I0705 13:47:24.797657       6 log.go:172] (0xc00196fc30) Reply frame received for 5
I0705 13:47:24.859242       6 log.go:172] (0xc00196fc30) Data frame received for 3
I0705 13:47:24.859286       6 log.go:172] (0xc0009179a0) (3) Data frame handling
I0705 13:47:24.859314       6 log.go:172] (0xc0009179a0) (3) Data frame sent
I0705 13:47:24.859325       6 log.go:172] (0xc00196fc30) Data frame received for 3
I0705 13:47:24.859338       6 log.go:172] (0xc0009179a0) (3) Data frame handling
I0705 13:47:24.859405       6 log.go:172] (0xc00196fc30) Data frame received for 5
I0705 13:47:24.859471       6 log.go:172] (0xc002e780a0) (5) Data frame handling
I0705 13:47:24.860871       6 log.go:172] (0xc00196fc30) Data frame received for 1
I0705 13:47:24.860915       6 log.go:172] (0xc000917900) (1) Data frame handling
I0705 13:47:24.860945       6 log.go:172] (0xc000917900) (1) Data frame sent
I0705 13:47:24.860970       6 log.go:172] (0xc00196fc30) (0xc000917900) Stream removed, broadcasting: 1
I0705 13:47:24.860999       6 log.go:172] (0xc00196fc30) Go away received
I0705 13:47:24.861315       6 log.go:172] (0xc00196fc30) (0xc000917900) Stream removed, broadcasting: 1
I0705 13:47:24.861341       6 log.go:172] (0xc00196fc30) (0xc0009179a0) Stream removed, broadcasting: 3
I0705 13:47:24.861353       6 log.go:172] (0xc00196fc30) (0xc002e780a0) Stream removed, broadcasting: 5
Jul  5 13:47:24.861: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jul  5 13:47:24.861: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4301 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:47:24.861: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:47:24.903478       6 log.go:172] (0xc001563130) (0xc000fd8640) Create stream
I0705 13:47:24.903505       6 log.go:172] (0xc001563130) (0xc000fd8640) Stream added, broadcasting: 1
I0705 13:47:24.906206       6 log.go:172] (0xc001563130) Reply frame received for 1
I0705 13:47:24.906237       6 log.go:172] (0xc001563130) (0xc000917ae0) Create stream
I0705 13:47:24.906244       6 log.go:172] (0xc001563130) (0xc000917ae0) Stream added, broadcasting: 3
I0705 13:47:24.907211       6 log.go:172] (0xc001563130) Reply frame received for 3
I0705 13:47:24.907262       6 log.go:172] (0xc001563130) (0xc000f5a960) Create stream
I0705 13:47:24.907276       6 log.go:172] (0xc001563130) (0xc000f5a960) Stream added, broadcasting: 5
I0705 13:47:24.908242       6 log.go:172] (0xc001563130) Reply frame received for 5
I0705 13:47:24.981951       6 log.go:172] (0xc001563130) Data frame received for 5
I0705 13:47:24.981988       6 log.go:172] (0xc000f5a960) (5) Data frame handling
I0705 13:47:24.982044       6 log.go:172] (0xc001563130) Data frame received for 3
I0705 13:47:24.982084       6 log.go:172] (0xc000917ae0) (3) Data frame handling
I0705 13:47:24.982104       6 log.go:172] (0xc000917ae0) (3) Data frame sent
I0705 13:47:24.982118       6 log.go:172] (0xc001563130) Data frame received for 3
I0705 13:47:24.982127       6 log.go:172] (0xc000917ae0) (3) Data frame handling
I0705 13:47:24.983607       6 log.go:172] (0xc001563130) Data frame received for 1
I0705 13:47:24.983631       6 log.go:172] (0xc000fd8640) (1) Data frame handling
I0705 13:47:24.983655       6 log.go:172] (0xc000fd8640) (1) Data frame sent
I0705 13:47:24.983668       6 log.go:172] (0xc001563130) (0xc000fd8640) Stream removed, broadcasting: 1
I0705 13:47:24.983679       6 log.go:172] (0xc001563130) Go away received
I0705 13:47:24.983834       6 log.go:172] (0xc001563130) (0xc000fd8640) Stream removed, broadcasting: 1
I0705 13:47:24.983851       6 log.go:172] (0xc001563130) (0xc000917ae0) Stream removed, broadcasting: 3
I0705 13:47:24.983862       6 log.go:172] (0xc001563130) (0xc000f5a960) Stream removed, broadcasting: 5
Jul  5 13:47:24.983: INFO: Exec stderr: ""
Jul  5 13:47:24.983: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4301 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:47:24.983: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:47:25.007536       6 log.go:172] (0xc00204e580) (0xc002e785a0) Create stream
I0705 13:47:25.007569       6 log.go:172] (0xc00204e580) (0xc002e785a0) Stream added, broadcasting: 1
I0705 13:47:25.013782       6 log.go:172] (0xc00204e580) Reply frame received for 1
I0705 13:47:25.013822       6 log.go:172] (0xc00204e580) (0xc002e78640) Create stream
I0705 13:47:25.013833       6 log.go:172] (0xc00204e580) (0xc002e78640) Stream added, broadcasting: 3
I0705 13:47:25.014635       6 log.go:172] (0xc00204e580) Reply frame received for 3
I0705 13:47:25.014675       6 log.go:172] (0xc00204e580) (0xc000fd8780) Create stream
I0705 13:47:25.014692       6 log.go:172] (0xc00204e580) (0xc000fd8780) Stream added, broadcasting: 5
I0705 13:47:25.015368       6 log.go:172] (0xc00204e580) Reply frame received for 5
I0705 13:47:25.063773       6 log.go:172] (0xc00204e580) Data frame received for 5
I0705 13:47:25.063829       6 log.go:172] (0xc000fd8780) (5) Data frame handling
I0705 13:47:25.063889       6 log.go:172] (0xc00204e580) Data frame received for 3
I0705 13:47:25.063905       6 log.go:172] (0xc002e78640) (3) Data frame handling
I0705 13:47:25.063913       6 log.go:172] (0xc002e78640) (3) Data frame sent
I0705 13:47:25.063928       6 log.go:172] (0xc00204e580) Data frame received for 3
I0705 13:47:25.063935       6 log.go:172] (0xc002e78640) (3) Data frame handling
I0705 13:47:25.064882       6 log.go:172] (0xc00204e580) Data frame received for 1
I0705 13:47:25.064902       6 log.go:172] (0xc002e785a0) (1) Data frame handling
I0705 13:47:25.064909       6 log.go:172] (0xc002e785a0) (1) Data frame sent
I0705 13:47:25.064921       6 log.go:172] (0xc00204e580) (0xc002e785a0) Stream removed, broadcasting: 1
I0705 13:47:25.064932       6 log.go:172] (0xc00204e580) Go away received
I0705 13:47:25.065004       6 log.go:172] (0xc00204e580) (0xc002e785a0) Stream removed, broadcasting: 1
I0705 13:47:25.065019       6 log.go:172] (0xc00204e580) (0xc002e78640) Stream removed, broadcasting: 3
I0705 13:47:25.065028       6 log.go:172] (0xc00204e580) (0xc000fd8780) Stream removed, broadcasting: 5
Jul  5 13:47:25.065: INFO: Exec stderr: ""
Jul  5 13:47:25.065: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4301 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:47:25.065: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:47:25.096207       6 log.go:172] (0xc00241a000) (0xc000fd8dc0) Create stream
I0705 13:47:25.096235       6 log.go:172] (0xc00241a000) (0xc000fd8dc0) Stream added, broadcasting: 1
I0705 13:47:25.098571       6 log.go:172] (0xc00241a000) Reply frame received for 1
I0705 13:47:25.098609       6 log.go:172] (0xc00241a000) (0xc002e786e0) Create stream
I0705 13:47:25.098626       6 log.go:172] (0xc00241a000) (0xc002e786e0) Stream added, broadcasting: 3
I0705 13:47:25.099553       6 log.go:172] (0xc00241a000) Reply frame received for 3
I0705 13:47:25.099593       6 log.go:172] (0xc00241a000) (0xc000f5aa00) Create stream
I0705 13:47:25.099609       6 log.go:172] (0xc00241a000) (0xc000f5aa00) Stream added, broadcasting: 5
I0705 13:47:25.100607       6 log.go:172] (0xc00241a000) Reply frame received for 5
I0705 13:47:25.150068       6 log.go:172] (0xc00241a000) Data frame received for 5
I0705 13:47:25.150121       6 log.go:172] (0xc00241a000) Data frame received for 3
I0705 13:47:25.150181       6 log.go:172] (0xc002e786e0) (3) Data frame handling
I0705 13:47:25.150221       6 log.go:172] (0xc002e786e0) (3) Data frame sent
I0705 13:47:25.150242       6 log.go:172] (0xc00241a000) Data frame received for 3
I0705 13:47:25.150262       6 log.go:172] (0xc002e786e0) (3) Data frame handling
I0705 13:47:25.150288       6 log.go:172] (0xc000f5aa00) (5) Data frame handling
I0705 13:47:25.151502       6 log.go:172] (0xc00241a000) Data frame received for 1
I0705 13:47:25.151525       6 log.go:172] (0xc000fd8dc0) (1) Data frame handling
I0705 13:47:25.151549       6 log.go:172] (0xc000fd8dc0) (1) Data frame sent
I0705 13:47:25.151580       6 log.go:172] (0xc00241a000) (0xc000fd8dc0) Stream removed, broadcasting: 1
I0705 13:47:25.151641       6 log.go:172] (0xc00241a000) Go away received
I0705 13:47:25.151689       6 log.go:172] (0xc00241a000) (0xc000fd8dc0) Stream removed, broadcasting: 1
I0705 13:47:25.151724       6 log.go:172] (0xc00241a000) (0xc002e786e0) Stream removed, broadcasting: 3
I0705 13:47:25.151747       6 log.go:172] (0xc00241a000) (0xc000f5aa00) Stream removed, broadcasting: 5
Jul  5 13:47:25.151: INFO: Exec stderr: ""
Jul  5 13:47:25.151: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4301 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul  5 13:47:25.151: INFO: >>> kubeConfig: /root/.kube/config
I0705 13:47:25.188890       6 log.go:172] (0xc0022b6d10) (0xc000f5af00) Create stream
I0705 13:47:25.188917       6 log.go:172] (0xc0022b6d10) (0xc000f5af00) Stream added, broadcasting: 1
I0705 13:47:25.191632       6 log.go:172] (0xc0022b6d10) Reply frame received for 1
I0705 13:47:25.191665       6 log.go:172] (0xc0022b6d10) (0xc000917ea0) Create stream
I0705 13:47:25.191678       6 log.go:172] (0xc0022b6d10) (0xc000917ea0) Stream added, broadcasting: 3
I0705 13:47:25.192599       6 log.go:172] (0xc0022b6d10) Reply frame received for 3
I0705 13:47:25.192638       6 log.go:172] (0xc0022b6d10) (0xc002e78780) Create stream
I0705 13:47:25.192652       6 log.go:172] (0xc0022b6d10) (0xc002e78780) Stream added, broadcasting: 5
I0705 13:47:25.193585       6 log.go:172] (0xc0022b6d10) Reply frame received for 5
I0705 13:47:25.262477       6 log.go:172] (0xc0022b6d10) Data frame received for 3
I0705 13:47:25.262533       6 log.go:172] (0xc000917ea0) (3) Data frame handling
I0705 13:47:25.262547       6 log.go:172] (0xc000917ea0) (3) Data frame sent
I0705 13:47:25.262559       6 log.go:172] (0xc0022b6d10) Data frame received for 3
I0705 13:47:25.262579       6 log.go:172] (0xc0022b6d10) Data frame received for 5
I0705 13:47:25.262604       6 log.go:172] (0xc002e78780) (5) Data frame handling
I0705 13:47:25.262626       6 log.go:172] (0xc000917ea0) (3) Data frame handling
I0705 13:47:25.264650       6 log.go:172] (0xc0022b6d10) Data frame received for 1
I0705 13:47:25.264678       6 log.go:172] (0xc000f5af00) (1) Data frame handling
I0705 13:47:25.264703       6 log.go:172] (0xc000f5af00) (1) Data frame sent
I0705 13:47:25.264722       6 log.go:172] (0xc0022b6d10) (0xc000f5af00) Stream removed, broadcasting: 1
I0705 13:47:25.264741       6 log.go:172] (0xc0022b6d10) Go away received
I0705 13:47:25.264878       6 log.go:172] (0xc0022b6d10) (0xc000f5af00) Stream removed, broadcasting: 1
I0705 13:47:25.264895       6 log.go:172] (0xc0022b6d10) (0xc000917ea0) Stream removed, broadcasting: 3
I0705 13:47:25.264903       6 log.go:172] (0xc0022b6d10) (0xc002e78780) Stream removed, broadcasting: 5
Jul  5 13:47:25.264: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:47:25.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4301" for this suite.
Jul  5 13:48:07.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:48:07.387: INFO: namespace e2e-kubelet-etc-hosts-4301 deletion completed in 42.110723436s

• [SLOW TEST:53.271 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
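
The /etc/hosts test above verifies one kubelet rule from three angles: in a hostNetwork=false pod the kubelet manages each container's /etc/hosts, unless the container mounts its own file over that path, and in a hostNetwork=true pod the file is never kubelet-managed. A sketch of the hostNetwork=false pod shape with one container that overrides the mount; the names, image, and hostPath source are illustrative:

  package main

  import (
      "fmt"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func main() {
      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
          Spec: corev1.PodSpec{
              // hostNetwork=false: the kubelet writes /etc/hosts for every
              // container except those that mount their own file over it.
              HostNetwork: false,
              Volumes: []corev1.Volume{{
                  Name: "host-etc-hosts",
                  VolumeSource: corev1.VolumeSource{
                      HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
                  },
              }},
              Containers: []corev1.Container{
                  // busybox-1 gets the kubelet-managed file.
                  {Name: "busybox-1", Image: "busybox:1.29", Command: []string{"sleep", "3600"}},
                  {
                      // busybox-3 mounts its own /etc/hosts, so the kubelet
                      // leaves this container's file alone.
                      Name:    "busybox-3",
                      Image:   "busybox:1.29",
                      Command: []string{"sleep", "3600"},
                      VolumeMounts: []corev1.VolumeMount{{
                          Name:      "host-etc-hosts",
                          MountPath: "/etc/hosts",
                      }},
                  },
              },
          },
      }
      fmt.Printf("hostNetwork=%v, containers=%d\n", pod.Spec.HostNetwork, len(pod.Spec.Containers))
  }
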
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:48:07.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jul  5 13:48:07.475: INFO: Waiting up to 5m0s for pod "pod-884b89ec-b678-4180-be7c-d04980eb13aa" in namespace "emptydir-9927" to be "success or failure"
Jul  5 13:48:07.477: INFO: Pod "pod-884b89ec-b678-4180-be7c-d04980eb13aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.459198ms
Jul  5 13:48:09.492: INFO: Pod "pod-884b89ec-b678-4180-be7c-d04980eb13aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017261387s
Jul  5 13:48:11.496: INFO: Pod "pod-884b89ec-b678-4180-be7c-d04980eb13aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020748488s
STEP: Saw pod success
Jul  5 13:48:11.496: INFO: Pod "pod-884b89ec-b678-4180-be7c-d04980eb13aa" satisfied condition "success or failure"
Jul  5 13:48:11.499: INFO: Trying to get logs from node iruya-worker pod pod-884b89ec-b678-4180-be7c-d04980eb13aa container test-container: 
STEP: delete the pod
Jul  5 13:48:11.533: INFO: Waiting for pod pod-884b89ec-b678-4180-be7c-d04980eb13aa to disappear
Jul  5 13:48:11.551: INFO: Pod pod-884b89ec-b678-4180-be7c-d04980eb13aa no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:48:11.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9927" for this suite.
Jul  5 13:48:19.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:48:19.668: INFO: namespace emptydir-9927 deletion completed in 8.112642176s

• [SLOW TEST:12.280 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
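
The (non-root,0777,default) tuple in the test name above decodes to: run the pod as a non-root UID, expect 0777 permissions on the emptyDir mount, and use the default disk-backed medium (an empty Medium string). A sketch of an equivalent pod; the UID, image, and verification command are illustrative stand-ins for the e2e fixture's own tooling:

  package main

  import (
      "fmt"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func int64Ptr(i int64) *int64 { return &i }

  func main() {
      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-nonroot-0777"},
          Spec: corev1.PodSpec{
              SecurityContext: &corev1.PodSecurityContext{
                  RunAsUser: int64Ptr(1001), // the "non-root" part of the tuple
              },
              Volumes: []corev1.Volume{{
                  Name: "test-volume",
                  VolumeSource: corev1.VolumeSource{
                      // StorageMediumDefault ("") selects the node's default
                      // disk-backed storage rather than tmpfs.
                      EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
                  },
              }},
              Containers: []corev1.Container{{
                  Name:  "test-container",
                  Image: "busybox:1.29",
                  // Write through the mount as the non-root UID, then print
                  // the directory's permission bits (expected 777).
                  Command: []string{"/bin/sh", "-c", "echo hi > /test-volume/f && stat -c '%a' /test-volume"},
                  VolumeMounts: []corev1.VolumeMount{{
                      Name:      "test-volume",
                      MountPath: "/test-volume",
                  }},
              }},
              RestartPolicy: corev1.RestartPolicyNever,
          },
      }
      fmt.Printf("%+v\n", pod.Spec.Volumes[0].VolumeSource.EmptyDir)
  }
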
SSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:48:19.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3474
I0705 13:48:19.726469       6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3474, replica count: 1
I0705 13:48:20.776953       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0705 13:48:21.777418       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0705 13:48:22.777678       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
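
Every Created/Got endpoints pair that follows measures one latency sample: the time from creating a Service selecting the now-running svc-latency-rc pod to observing that Service's Endpoints object populated. A polling sketch of that measurement; the real test watches rather than polls, and the kubeconfig path, namespace, and pre-context client signatures are assumptions:

  package main

  import (
      "fmt"
      "time"

      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      client, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      ns := "default" // the run above uses a generated namespace, svc-latency-3474
      svc := &corev1.Service{
          ObjectMeta: metav1.ObjectMeta{GenerateName: "latency-svc-"},
          Spec: corev1.ServiceSpec{
              Selector: map[string]string{"name": "svc-latency-rc"},
              Ports:    []corev1.ServicePort{{Port: 80}},
          },
      }

      start := time.Now()
      created, err := client.CoreV1().Services(ns).Create(svc)
      if err != nil {
          panic(err)
      }

      // Poll until the Endpoints object gains at least one subset, then
      // report the creation-to-ready interval, i.e. one latency sample.
      deadline := time.Now().Add(30 * time.Second)
      for time.Now().Before(deadline) {
          ep, err := client.CoreV1().Endpoints(ns).Get(created.Name, metav1.GetOptions{})
          if err == nil && len(ep.Subsets) > 0 {
              fmt.Printf("Got endpoints: %s [%v]\n", created.Name, time.Since(start))
              return
          }
          time.Sleep(10 * time.Millisecond)
      }
      fmt.Println("timed out waiting for endpoints:", created.Name)
  }
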
Jul  5 13:48:22.914: INFO: Created: latency-svc-jkhpf
Jul  5 13:48:22.929: INFO: Got endpoints: latency-svc-jkhpf [51.580194ms]
Jul  5 13:48:22.974: INFO: Created: latency-svc-vdw2j
Jul  5 13:48:22.978: INFO: Got endpoints: latency-svc-vdw2j [48.754393ms]
Jul  5 13:48:23.007: INFO: Created: latency-svc-8k84z
Jul  5 13:48:23.014: INFO: Got endpoints: latency-svc-8k84z [84.98349ms]
Jul  5 13:48:23.038: INFO: Created: latency-svc-2b28h
Jul  5 13:48:23.074: INFO: Got endpoints: latency-svc-2b28h [145.176002ms]
Jul  5 13:48:23.091: INFO: Created: latency-svc-8sm5d
Jul  5 13:48:23.111: INFO: Got endpoints: latency-svc-8sm5d [181.608445ms]
Jul  5 13:48:23.136: INFO: Created: latency-svc-6wbpz
Jul  5 13:48:23.155: INFO: Got endpoints: latency-svc-6wbpz [226.062612ms]
Jul  5 13:48:23.272: INFO: Created: latency-svc-h78lx
Jul  5 13:48:23.279: INFO: Got endpoints: latency-svc-h78lx [350.188146ms]
Jul  5 13:48:23.331: INFO: Created: latency-svc-mvvwh
Jul  5 13:48:23.346: INFO: Got endpoints: latency-svc-mvvwh [416.162991ms]
Jul  5 13:48:23.368: INFO: Created: latency-svc-4pl46
Jul  5 13:48:23.440: INFO: Got endpoints: latency-svc-4pl46 [510.449559ms]
Jul  5 13:48:23.444: INFO: Created: latency-svc-5sghv
Jul  5 13:48:23.454: INFO: Got endpoints: latency-svc-5sghv [524.648339ms]
Jul  5 13:48:23.497: INFO: Created: latency-svc-rqq6r
Jul  5 13:48:23.508: INFO: Got endpoints: latency-svc-rqq6r [578.485106ms]
Jul  5 13:48:23.531: INFO: Created: latency-svc-gpptf
Jul  5 13:48:23.595: INFO: Got endpoints: latency-svc-gpptf [665.421327ms]
Jul  5 13:48:23.598: INFO: Created: latency-svc-phn58
Jul  5 13:48:23.604: INFO: Got endpoints: latency-svc-phn58 [675.131945ms]
Jul  5 13:48:23.628: INFO: Created: latency-svc-rw4f5
Jul  5 13:48:23.640: INFO: Got endpoints: latency-svc-rw4f5 [711.140805ms]
Jul  5 13:48:23.665: INFO: Created: latency-svc-tgvrf
Jul  5 13:48:23.677: INFO: Got endpoints: latency-svc-tgvrf [747.85323ms]
Jul  5 13:48:23.739: INFO: Created: latency-svc-v6x7c
Jul  5 13:48:23.741: INFO: Got endpoints: latency-svc-v6x7c [811.749131ms]
Jul  5 13:48:23.777: INFO: Created: latency-svc-b4v68
Jul  5 13:48:23.785: INFO: Got endpoints: latency-svc-b4v68 [807.156516ms]
Jul  5 13:48:23.835: INFO: Created: latency-svc-ggfqj
Jul  5 13:48:23.888: INFO: Got endpoints: latency-svc-ggfqj [874.241073ms]
Jul  5 13:48:23.901: INFO: Created: latency-svc-cggk5
Jul  5 13:48:23.918: INFO: Got endpoints: latency-svc-cggk5 [843.714182ms]
Jul  5 13:48:23.945: INFO: Created: latency-svc-rgx9q
Jul  5 13:48:23.975: INFO: Got endpoints: latency-svc-rgx9q [864.097451ms]
Jul  5 13:48:24.051: INFO: Created: latency-svc-dxrw4
Jul  5 13:48:24.056: INFO: Got endpoints: latency-svc-dxrw4 [900.680555ms]
Jul  5 13:48:24.082: INFO: Created: latency-svc-lmwdp
Jul  5 13:48:24.098: INFO: Got endpoints: latency-svc-lmwdp [818.979584ms]
Jul  5 13:48:24.142: INFO: Created: latency-svc-6jj47
Jul  5 13:48:24.241: INFO: Got endpoints: latency-svc-6jj47 [895.928678ms]
Jul  5 13:48:24.243: INFO: Created: latency-svc-qbw55
Jul  5 13:48:24.279: INFO: Got endpoints: latency-svc-qbw55 [839.719648ms]
Jul  5 13:48:24.303: INFO: Created: latency-svc-8xhcw
Jul  5 13:48:24.334: INFO: Got endpoints: latency-svc-8xhcw [879.558316ms]
Jul  5 13:48:24.410: INFO: Created: latency-svc-xdnkt
Jul  5 13:48:24.417: INFO: Got endpoints: latency-svc-xdnkt [909.241703ms]
Jul  5 13:48:24.437: INFO: Created: latency-svc-9ffkh
Jul  5 13:48:24.461: INFO: Got endpoints: latency-svc-9ffkh [865.681343ms]
Jul  5 13:48:24.492: INFO: Created: latency-svc-744rr
Jul  5 13:48:24.502: INFO: Got endpoints: latency-svc-744rr [897.247028ms]
Jul  5 13:48:24.554: INFO: Created: latency-svc-lkslb
Jul  5 13:48:24.556: INFO: Got endpoints: latency-svc-lkslb [915.888153ms]
Jul  5 13:48:24.585: INFO: Created: latency-svc-fj9c6
Jul  5 13:48:24.598: INFO: Got endpoints: latency-svc-fj9c6 [921.120374ms]
Jul  5 13:48:24.623: INFO: Created: latency-svc-ddzzh
Jul  5 13:48:24.640: INFO: Got endpoints: latency-svc-ddzzh [899.226607ms]
Jul  5 13:48:24.685: INFO: Created: latency-svc-dpjhz
Jul  5 13:48:24.688: INFO: Got endpoints: latency-svc-dpjhz [903.030056ms]
Jul  5 13:48:24.731: INFO: Created: latency-svc-f46hz
Jul  5 13:48:24.761: INFO: Got endpoints: latency-svc-f46hz [872.596834ms]
Jul  5 13:48:24.829: INFO: Created: latency-svc-twmzx
Jul  5 13:48:24.855: INFO: Created: latency-svc-8k9mq
Jul  5 13:48:24.855: INFO: Got endpoints: latency-svc-twmzx [936.597645ms]
Jul  5 13:48:24.884: INFO: Got endpoints: latency-svc-8k9mq [909.249873ms]
Jul  5 13:48:24.921: INFO: Created: latency-svc-zqskq
Jul  5 13:48:24.960: INFO: Got endpoints: latency-svc-zqskq [903.881326ms]
Jul  5 13:48:24.970: INFO: Created: latency-svc-nt8pm
Jul  5 13:48:24.984: INFO: Got endpoints: latency-svc-nt8pm [885.21383ms]
Jul  5 13:48:25.007: INFO: Created: latency-svc-rstf8
Jul  5 13:48:25.020: INFO: Got endpoints: latency-svc-rstf8 [778.372402ms]
Jul  5 13:48:25.043: INFO: Created: latency-svc-pvb7f
Jul  5 13:48:25.056: INFO: Got endpoints: latency-svc-pvb7f [776.894258ms]
Jul  5 13:48:25.104: INFO: Created: latency-svc-c5bmr
Jul  5 13:48:25.111: INFO: Got endpoints: latency-svc-c5bmr [776.960889ms]
Jul  5 13:48:25.130: INFO: Created: latency-svc-cd98l
Jul  5 13:48:25.147: INFO: Got endpoints: latency-svc-cd98l [729.397177ms]
Jul  5 13:48:25.169: INFO: Created: latency-svc-ndfnw
Jul  5 13:48:25.189: INFO: Got endpoints: latency-svc-ndfnw [728.483294ms]
Jul  5 13:48:25.266: INFO: Created: latency-svc-92gqt
Jul  5 13:48:25.304: INFO: Created: latency-svc-2z66g
Jul  5 13:48:25.304: INFO: Got endpoints: latency-svc-92gqt [802.22328ms]
Jul  5 13:48:25.322: INFO: Got endpoints: latency-svc-2z66g [765.412533ms]
Jul  5 13:48:25.352: INFO: Created: latency-svc-8zh7j
Jul  5 13:48:25.409: INFO: Got endpoints: latency-svc-8zh7j [810.663945ms]
Jul  5 13:48:25.451: INFO: Created: latency-svc-qps4f
Jul  5 13:48:25.487: INFO: Got endpoints: latency-svc-qps4f [846.262148ms]
Jul  5 13:48:25.565: INFO: Created: latency-svc-rbzg2
Jul  5 13:48:25.567: INFO: Got endpoints: latency-svc-rbzg2 [879.209627ms]
Jul  5 13:48:25.599: INFO: Created: latency-svc-f2msn
Jul  5 13:48:25.616: INFO: Got endpoints: latency-svc-f2msn [855.124219ms]
Jul  5 13:48:25.640: INFO: Created: latency-svc-bhbqb
Jul  5 13:48:25.658: INFO: Got endpoints: latency-svc-bhbqb [803.475307ms]
Jul  5 13:48:25.714: INFO: Created: latency-svc-zrs2r
Jul  5 13:48:25.718: INFO: Got endpoints: latency-svc-zrs2r [833.860558ms]
Jul  5 13:48:25.738: INFO: Created: latency-svc-bm5w7
Jul  5 13:48:25.749: INFO: Got endpoints: latency-svc-bm5w7 [789.100633ms]
Jul  5 13:48:25.768: INFO: Created: latency-svc-hkkzq
Jul  5 13:48:25.779: INFO: Got endpoints: latency-svc-hkkzq [795.150823ms]
Jul  5 13:48:25.803: INFO: Created: latency-svc-s652r
Jul  5 13:48:25.865: INFO: Got endpoints: latency-svc-s652r [844.493285ms]
Jul  5 13:48:25.866: INFO: Created: latency-svc-929fx
Jul  5 13:48:25.876: INFO: Got endpoints: latency-svc-929fx [819.184338ms]
Jul  5 13:48:25.904: INFO: Created: latency-svc-xt6ml
Jul  5 13:48:25.936: INFO: Got endpoints: latency-svc-xt6ml [825.600104ms]
Jul  5 13:48:26.021: INFO: Created: latency-svc-d5kbr
Jul  5 13:48:26.048: INFO: Got endpoints: latency-svc-d5kbr [901.134176ms]
Jul  5 13:48:26.078: INFO: Created: latency-svc-nx9tx
Jul  5 13:48:26.092: INFO: Got endpoints: latency-svc-nx9tx [902.714276ms]
Jul  5 13:48:26.115: INFO: Created: latency-svc-r4xnc
Jul  5 13:48:26.195: INFO: Got endpoints: latency-svc-r4xnc [890.624993ms]
Jul  5 13:48:26.200: INFO: Created: latency-svc-vv8cm
Jul  5 13:48:26.207: INFO: Got endpoints: latency-svc-vv8cm [885.007756ms]
Jul  5 13:48:26.399: INFO: Created: latency-svc-mb7z7
Jul  5 13:48:26.403: INFO: Got endpoints: latency-svc-mb7z7 [994.0727ms]
Jul  5 13:48:26.465: INFO: Created: latency-svc-kw7w2
Jul  5 13:48:26.477: INFO: Got endpoints: latency-svc-kw7w2 [990.207721ms]
Jul  5 13:48:26.565: INFO: Created: latency-svc-cvzcp
Jul  5 13:48:26.594: INFO: Got endpoints: latency-svc-cvzcp [1.026199457s]
Jul  5 13:48:26.633: INFO: Created: latency-svc-fssvx
Jul  5 13:48:26.645: INFO: Got endpoints: latency-svc-fssvx [1.028997836s]
Jul  5 13:48:26.715: INFO: Created: latency-svc-q8g6n
Jul  5 13:48:26.717: INFO: Got endpoints: latency-svc-q8g6n [1.059216879s]
Jul  5 13:48:26.759: INFO: Created: latency-svc-wsmjg
Jul  5 13:48:26.772: INFO: Got endpoints: latency-svc-wsmjg [1.053485255s]
Jul  5 13:48:26.792: INFO: Created: latency-svc-jzqfd
Jul  5 13:48:26.808: INFO: Got endpoints: latency-svc-jzqfd [1.05871085s]
Jul  5 13:48:26.859: INFO: Created: latency-svc-g5hxw
Jul  5 13:48:26.862: INFO: Got endpoints: latency-svc-g5hxw [1.083319287s]
Jul  5 13:48:26.889: INFO: Created: latency-svc-g56ll
Jul  5 13:48:26.905: INFO: Got endpoints: latency-svc-g56ll [1.039949941s]
Jul  5 13:48:26.926: INFO: Created: latency-svc-m5qmn
Jul  5 13:48:26.941: INFO: Got endpoints: latency-svc-m5qmn [1.065353741s]
Jul  5 13:48:27.002: INFO: Created: latency-svc-fs658
Jul  5 13:48:27.005: INFO: Got endpoints: latency-svc-fs658 [1.068932961s]
Jul  5 13:48:27.062: INFO: Created: latency-svc-s49rp
Jul  5 13:48:27.092: INFO: Got endpoints: latency-svc-s49rp [1.044168671s]
Jul  5 13:48:27.152: INFO: Created: latency-svc-pvr8q
Jul  5 13:48:27.157: INFO: Got endpoints: latency-svc-pvr8q [1.065204372s]
Jul  5 13:48:27.184: INFO: Created: latency-svc-wvcqq
Jul  5 13:48:27.200: INFO: Got endpoints: latency-svc-wvcqq [1.004961491s]
Jul  5 13:48:27.221: INFO: Created: latency-svc-qfmkc
Jul  5 13:48:27.230: INFO: Got endpoints: latency-svc-qfmkc [1.022897243s]
Jul  5 13:48:27.250: INFO: Created: latency-svc-d68fx
Jul  5 13:48:27.283: INFO: Got endpoints: latency-svc-d68fx [879.840596ms]
Jul  5 13:48:27.295: INFO: Created: latency-svc-vh2zb
Jul  5 13:48:27.314: INFO: Got endpoints: latency-svc-vh2zb [837.118244ms]
Jul  5 13:48:27.338: INFO: Created: latency-svc-w8mhh
Jul  5 13:48:27.357: INFO: Got endpoints: latency-svc-w8mhh [762.817259ms]
Jul  5 13:48:27.379: INFO: Created: latency-svc-czwpf
Jul  5 13:48:27.415: INFO: Got endpoints: latency-svc-czwpf [769.591699ms]
Jul  5 13:48:27.442: INFO: Created: latency-svc-nccq8
Jul  5 13:48:27.459: INFO: Got endpoints: latency-svc-nccq8 [741.754021ms]
Jul  5 13:48:27.484: INFO: Created: latency-svc-g7mwl
Jul  5 13:48:27.496: INFO: Got endpoints: latency-svc-g7mwl [723.653922ms]
Jul  5 13:48:27.559: INFO: Created: latency-svc-c4wjn
Jul  5 13:48:27.583: INFO: Got endpoints: latency-svc-c4wjn [774.936677ms]
Jul  5 13:48:27.613: INFO: Created: latency-svc-52629
Jul  5 13:48:27.634: INFO: Got endpoints: latency-svc-52629 [772.063642ms]
Jul  5 13:48:27.649: INFO: Created: latency-svc-s9vzv
Jul  5 13:48:27.721: INFO: Got endpoints: latency-svc-s9vzv [815.966429ms]
Jul  5 13:48:27.723: INFO: Created: latency-svc-2v52z
Jul  5 13:48:27.730: INFO: Got endpoints: latency-svc-2v52z [788.877956ms]
Jul  5 13:48:27.754: INFO: Created: latency-svc-6b5wq
Jul  5 13:48:27.768: INFO: Got endpoints: latency-svc-6b5wq [762.603543ms]
Jul  5 13:48:27.795: INFO: Created: latency-svc-54wz5
Jul  5 13:48:27.809: INFO: Got endpoints: latency-svc-54wz5 [716.794336ms]
Jul  5 13:48:27.870: INFO: Created: latency-svc-jjx9n
Jul  5 13:48:27.902: INFO: Got endpoints: latency-svc-jjx9n [133.56627ms]
Jul  5 13:48:27.903: INFO: Created: latency-svc-8kvff
Jul  5 13:48:27.924: INFO: Got endpoints: latency-svc-8kvff [766.12036ms]
Jul  5 13:48:27.961: INFO: Created: latency-svc-95bjk
Jul  5 13:48:28.008: INFO: Got endpoints: latency-svc-95bjk [807.697619ms]
Jul  5 13:48:28.017: INFO: Created: latency-svc-dxl2l
Jul  5 13:48:28.047: INFO: Got endpoints: latency-svc-dxl2l [817.517734ms]
Jul  5 13:48:28.087: INFO: Created: latency-svc-kkr9g
Jul  5 13:48:28.104: INFO: Got endpoints: latency-svc-kkr9g [820.670297ms]
Jul  5 13:48:28.152: INFO: Created: latency-svc-p2vt4
Jul  5 13:48:28.158: INFO: Got endpoints: latency-svc-p2vt4 [843.770644ms]
Jul  5 13:48:28.186: INFO: Created: latency-svc-9mgs9
Jul  5 13:48:28.201: INFO: Got endpoints: latency-svc-9mgs9 [843.97226ms]
Jul  5 13:48:28.233: INFO: Created: latency-svc-x6jtm
Jul  5 13:48:28.242: INFO: Got endpoints: latency-svc-x6jtm [827.300249ms]
Jul  5 13:48:28.303: INFO: Created: latency-svc-fcf8t
Jul  5 13:48:28.315: INFO: Got endpoints: latency-svc-fcf8t [855.375539ms]
Jul  5 13:48:28.345: INFO: Created: latency-svc-5225q
Jul  5 13:48:28.351: INFO: Got endpoints: latency-svc-5225q [855.337628ms]
Jul  5 13:48:28.395: INFO: Created: latency-svc-9fg26
Jul  5 13:48:28.449: INFO: Got endpoints: latency-svc-9fg26 [866.086458ms]
Jul  5 13:48:28.489: INFO: Created: latency-svc-77mw9
Jul  5 13:48:28.508: INFO: Got endpoints: latency-svc-77mw9 [873.225534ms]
Jul  5 13:48:28.589: INFO: Created: latency-svc-6qjsf
Jul  5 13:48:28.610: INFO: Got endpoints: latency-svc-6qjsf [889.157913ms]
Jul  5 13:48:28.647: INFO: Created: latency-svc-26264
Jul  5 13:48:28.688: INFO: Got endpoints: latency-svc-26264 [958.194712ms]
Jul  5 13:48:28.759: INFO: Created: latency-svc-hfkd4
Jul  5 13:48:28.778: INFO: Got endpoints: latency-svc-hfkd4 [969.02063ms]
Jul  5 13:48:28.797: INFO: Created: latency-svc-cn8xf
Jul  5 13:48:28.814: INFO: Got endpoints: latency-svc-cn8xf [912.725244ms]
Jul  5 13:48:28.864: INFO: Created: latency-svc-4vr5v
Jul  5 13:48:28.881: INFO: Got endpoints: latency-svc-4vr5v [957.883445ms]
Jul  5 13:48:28.915: INFO: Created: latency-svc-2wfbw
Jul  5 13:48:28.929: INFO: Got endpoints: latency-svc-2wfbw [921.442181ms]
Jul  5 13:48:28.963: INFO: Created: latency-svc-4zzjb
Jul  5 13:48:29.002: INFO: Got endpoints: latency-svc-4zzjb [954.382782ms]
Jul  5 13:48:29.017: INFO: Created: latency-svc-pbvq8
Jul  5 13:48:29.031: INFO: Got endpoints: latency-svc-pbvq8 [927.367409ms]
Jul  5 13:48:29.061: INFO: Created: latency-svc-cfv2s
Jul  5 13:48:29.080: INFO: Got endpoints: latency-svc-cfv2s [921.70546ms]
Jul  5 13:48:30.231: INFO: Created: latency-svc-grwl2
Jul  5 13:48:31.554: INFO: Created: latency-svc-zzd95
Jul  5 13:48:31.558: INFO: Got endpoints: latency-svc-grwl2 [3.356667606s]
Jul  5 13:48:31.562: INFO: Got endpoints: latency-svc-zzd95 [3.319838898s]
Jul  5 13:48:31.733: INFO: Created: latency-svc-c9nrv
Jul  5 13:48:31.736: INFO: Got endpoints: latency-svc-c9nrv [3.421143184s]
Jul  5 13:48:31.775: INFO: Created: latency-svc-92c95
Jul  5 13:48:31.794: INFO: Got endpoints: latency-svc-92c95 [3.442696094s]
Jul  5 13:48:31.871: INFO: Created: latency-svc-ffc2p
Jul  5 13:48:31.874: INFO: Got endpoints: latency-svc-ffc2p [3.424677492s]
Jul  5 13:48:31.911: INFO: Created: latency-svc-c4cgk
Jul  5 13:48:31.926: INFO: Got endpoints: latency-svc-c4cgk [3.418138156s]
Jul  5 13:48:31.947: INFO: Created: latency-svc-qfz2f
Jul  5 13:48:31.965: INFO: Got endpoints: latency-svc-qfz2f [3.355361309s]
Jul  5 13:48:32.015: INFO: Created: latency-svc-n5kx5
Jul  5 13:48:32.057: INFO: Got endpoints: latency-svc-n5kx5 [3.369051029s]
Jul  5 13:48:32.059: INFO: Created: latency-svc-kjg4x
Jul  5 13:48:32.070: INFO: Got endpoints: latency-svc-kjg4x [3.292152756s]
Jul  5 13:48:32.103: INFO: Created: latency-svc-82947
Jul  5 13:48:32.170: INFO: Got endpoints: latency-svc-82947 [3.355219949s]
Jul  5 13:48:32.183: INFO: Created: latency-svc-588cc
Jul  5 13:48:32.207: INFO: Got endpoints: latency-svc-588cc [3.325375811s]
Jul  5 13:48:32.237: INFO: Created: latency-svc-7bczc
Jul  5 13:48:32.252: INFO: Got endpoints: latency-svc-7bczc [3.322616809s]
Jul  5 13:48:32.319: INFO: Created: latency-svc-flwp5
Jul  5 13:48:32.323: INFO: Got endpoints: latency-svc-flwp5 [3.321406192s]
Jul  5 13:48:32.398: INFO: Created: latency-svc-c4tzs
Jul  5 13:48:32.414: INFO: Got endpoints: latency-svc-c4tzs [3.38250184s]
Jul  5 13:48:32.481: INFO: Created: latency-svc-frhl7
Jul  5 13:48:32.486: INFO: Got endpoints: latency-svc-frhl7 [3.406383132s]
Jul  5 13:48:32.507: INFO: Created: latency-svc-r7g6c
Jul  5 13:48:32.516: INFO: Got endpoints: latency-svc-r7g6c [958.322897ms]
Jul  5 13:48:32.553: INFO: Created: latency-svc-4kqtr
Jul  5 13:48:32.570: INFO: Got endpoints: latency-svc-4kqtr [1.008036162s]
Jul  5 13:48:32.625: INFO: Created: latency-svc-2x25l
Jul  5 13:48:32.628: INFO: Got endpoints: latency-svc-2x25l [891.919256ms]
Jul  5 13:48:32.663: INFO: Created: latency-svc-htn62
Jul  5 13:48:32.679: INFO: Got endpoints: latency-svc-htn62 [885.21361ms]
Jul  5 13:48:32.699: INFO: Created: latency-svc-tfs2g
Jul  5 13:48:32.709: INFO: Got endpoints: latency-svc-tfs2g [835.210158ms]
Jul  5 13:48:32.787: INFO: Created: latency-svc-wv59j
Jul  5 13:48:32.817: INFO: Got endpoints: latency-svc-wv59j [890.992385ms]
Jul  5 13:48:32.818: INFO: Created: latency-svc-9929p
Jul  5 13:48:32.852: INFO: Got endpoints: latency-svc-9929p [886.953771ms]
Jul  5 13:48:32.925: INFO: Created: latency-svc-kggtq
Jul  5 13:48:32.928: INFO: Got endpoints: latency-svc-kggtq [870.735332ms]
Jul  5 13:48:32.964: INFO: Created: latency-svc-c9r8s
Jul  5 13:48:32.980: INFO: Got endpoints: latency-svc-c9r8s [909.425898ms]
Jul  5 13:48:33.006: INFO: Created: latency-svc-7rzn8
Jul  5 13:48:33.016: INFO: Got endpoints: latency-svc-7rzn8 [846.223877ms]
Jul  5 13:48:33.062: INFO: Created: latency-svc-mx457
Jul  5 13:48:33.070: INFO: Got endpoints: latency-svc-mx457 [863.052702ms]
Jul  5 13:48:33.117: INFO: Created: latency-svc-lxxgb
Jul  5 13:48:33.143: INFO: Got endpoints: latency-svc-lxxgb [890.856803ms]
Jul  5 13:48:33.206: INFO: Created: latency-svc-76b5b
Jul  5 13:48:33.208: INFO: Got endpoints: latency-svc-76b5b [885.036474ms]
Jul  5 13:48:33.269: INFO: Created: latency-svc-qd94g
Jul  5 13:48:33.281: INFO: Got endpoints: latency-svc-qd94g [867.353492ms]
Jul  5 13:48:33.305: INFO: Created: latency-svc-zd9ml
Jul  5 13:48:33.362: INFO: Got endpoints: latency-svc-zd9ml [875.233621ms]
Jul  5 13:48:33.380: INFO: Created: latency-svc-bxqx6
Jul  5 13:48:33.395: INFO: Got endpoints: latency-svc-bxqx6 [879.411688ms]
Jul  5 13:48:33.417: INFO: Created: latency-svc-xz5w2
Jul  5 13:48:33.432: INFO: Got endpoints: latency-svc-xz5w2 [861.412133ms]
Jul  5 13:48:33.455: INFO: Created: latency-svc-5lsk5
Jul  5 13:48:33.493: INFO: Got endpoints: latency-svc-5lsk5 [865.034205ms]
Jul  5 13:48:33.503: INFO: Created: latency-svc-4g69k
Jul  5 13:48:33.516: INFO: Got endpoints: latency-svc-4g69k [837.163589ms]
Jul  5 13:48:33.539: INFO: Created: latency-svc-dhzm2
Jul  5 13:48:33.552: INFO: Got endpoints: latency-svc-dhzm2 [843.244403ms]
Jul  5 13:48:33.578: INFO: Created: latency-svc-gtg8g
Jul  5 13:48:33.637: INFO: Got endpoints: latency-svc-gtg8g [819.516014ms]
Jul  5 13:48:33.650: INFO: Created: latency-svc-54pv9
Jul  5 13:48:33.688: INFO: Got endpoints: latency-svc-54pv9 [835.784545ms]
Jul  5 13:48:33.811: INFO: Created: latency-svc-f8jrv
Jul  5 13:48:33.814: INFO: Got endpoints: latency-svc-f8jrv [885.841519ms]
Jul  5 13:48:33.879: INFO: Created: latency-svc-ls2z9
Jul  5 13:48:33.890: INFO: Got endpoints: latency-svc-ls2z9 [910.260532ms]
Jul  5 13:48:33.979: INFO: Created: latency-svc-wnfzl
Jul  5 13:48:33.981: INFO: Got endpoints: latency-svc-wnfzl [965.54471ms]
Jul  5 13:48:34.042: INFO: Created: latency-svc-7gpsc
Jul  5 13:48:34.058: INFO: Got endpoints: latency-svc-7gpsc [988.359001ms]
Jul  5 13:48:34.122: INFO: Created: latency-svc-wwvtd
Jul  5 13:48:34.125: INFO: Got endpoints: latency-svc-wwvtd [982.059314ms]
Jul  5 13:48:34.154: INFO: Created: latency-svc-6gc4v
Jul  5 13:48:34.167: INFO: Got endpoints: latency-svc-6gc4v [958.263949ms]
Jul  5 13:48:34.198: INFO: Created: latency-svc-vgmxk
Jul  5 13:48:34.216: INFO: Got endpoints: latency-svc-vgmxk [934.181097ms]
Jul  5 13:48:34.278: INFO: Created: latency-svc-r4p97
Jul  5 13:48:34.305: INFO: Got endpoints: latency-svc-r4p97 [943.298425ms]
Jul  5 13:48:34.352: INFO: Created: latency-svc-xlzz8
Jul  5 13:48:34.372: INFO: Got endpoints: latency-svc-xlzz8 [976.132213ms]
Jul  5 13:48:34.421: INFO: Created: latency-svc-fj8nh
Jul  5 13:48:34.444: INFO: Got endpoints: latency-svc-fj8nh [1.011941406s]
Jul  5 13:48:34.474: INFO: Created: latency-svc-skk8h
Jul  5 13:48:34.492: INFO: Got endpoints: latency-svc-skk8h [999.169032ms]
Jul  5 13:48:34.516: INFO: Created: latency-svc-8mt4m
Jul  5 13:48:34.577: INFO: Got endpoints: latency-svc-8mt4m [1.060379348s]
Jul  5 13:48:34.579: INFO: Created: latency-svc-2nwsf
Jul  5 13:48:34.588: INFO: Got endpoints: latency-svc-2nwsf [1.035769723s]
Jul  5 13:48:34.610: INFO: Created: latency-svc-2ncjn
Jul  5 13:48:34.619: INFO: Got endpoints: latency-svc-2ncjn [981.683496ms]
Jul  5 13:48:34.646: INFO: Created: latency-svc-4w9dr
Jul  5 13:48:34.655: INFO: Got endpoints: latency-svc-4w9dr [966.92123ms]
Jul  5 13:48:34.714: INFO: Created: latency-svc-ddm2c
Jul  5 13:48:34.721: INFO: Got endpoints: latency-svc-ddm2c [907.324199ms]
Jul  5 13:48:34.744: INFO: Created: latency-svc-96k9x
Jul  5 13:48:34.764: INFO: Got endpoints: latency-svc-96k9x [873.557817ms]
Jul  5 13:48:34.787: INFO: Created: latency-svc-5nvrz
Jul  5 13:48:34.876: INFO: Got endpoints: latency-svc-5nvrz [894.729926ms]
Jul  5 13:48:34.878: INFO: Created: latency-svc-slzrj
Jul  5 13:48:34.884: INFO: Got endpoints: latency-svc-slzrj [825.181414ms]
Jul  5 13:48:34.904: INFO: Created: latency-svc-5gr5w
Jul  5 13:48:34.915: INFO: Got endpoints: latency-svc-5gr5w [789.896593ms]
Jul  5 13:48:34.942: INFO: Created: latency-svc-2gcgd
Jul  5 13:48:34.957: INFO: Got endpoints: latency-svc-2gcgd [789.717325ms]
Jul  5 13:48:35.009: INFO: Created: latency-svc-d8fp6
Jul  5 13:48:35.012: INFO: Got endpoints: latency-svc-d8fp6 [796.334565ms]
Jul  5 13:48:35.038: INFO: Created: latency-svc-4r2mq
Jul  5 13:48:35.053: INFO: Got endpoints: latency-svc-4r2mq [748.109986ms]
Jul  5 13:48:35.078: INFO: Created: latency-svc-kgv7v
Jul  5 13:48:35.096: INFO: Got endpoints: latency-svc-kgv7v [724.05917ms]
Jul  5 13:48:35.140: INFO: Created: latency-svc-fsg9z
Jul  5 13:48:35.142: INFO: Got endpoints: latency-svc-fsg9z [698.396674ms]
Jul  5 13:48:35.168: INFO: Created: latency-svc-cz5dm
Jul  5 13:48:35.186: INFO: Got endpoints: latency-svc-cz5dm [693.960933ms]
Jul  5 13:48:35.212: INFO: Created: latency-svc-hpw9f
Jul  5 13:48:35.228: INFO: Got endpoints: latency-svc-hpw9f [651.699362ms]
Jul  5 13:48:35.284: INFO: Created: latency-svc-649jg
Jul  5 13:48:35.324: INFO: Got endpoints: latency-svc-649jg [735.348056ms]
Jul  5 13:48:35.360: INFO: Created: latency-svc-x4g42
Jul  5 13:48:35.372: INFO: Got endpoints: latency-svc-x4g42 [753.897301ms]
Jul  5 13:48:35.427: INFO: Created: latency-svc-dpw2j
Jul  5 13:48:35.430: INFO: Got endpoints: latency-svc-dpw2j [774.744818ms]
Jul  5 13:48:35.452: INFO: Created: latency-svc-kqbm6
Jul  5 13:48:35.469: INFO: Got endpoints: latency-svc-kqbm6 [747.987037ms]
Jul  5 13:48:35.495: INFO: Created: latency-svc-bwbrh
Jul  5 13:48:35.512: INFO: Got endpoints: latency-svc-bwbrh [748.291088ms]
Jul  5 13:48:35.577: INFO: Created: latency-svc-97z5s
Jul  5 13:48:35.579: INFO: Got endpoints: latency-svc-97z5s [702.801295ms]
Jul  5 13:48:35.605: INFO: Created: latency-svc-2fn8g
Jul  5 13:48:35.620: INFO: Got endpoints: latency-svc-2fn8g [736.817622ms]
Jul  5 13:48:35.648: INFO: Created: latency-svc-ch5zk
Jul  5 13:48:35.662: INFO: Got endpoints: latency-svc-ch5zk [747.638081ms]
Jul  5 13:48:35.715: INFO: Created: latency-svc-6tg4l
Jul  5 13:48:35.746: INFO: Got endpoints: latency-svc-6tg4l [789.490478ms]
Jul  5 13:48:35.809: INFO: Created: latency-svc-m6z85
Jul  5 13:48:35.852: INFO: Got endpoints: latency-svc-m6z85 [840.290039ms]
Jul  5 13:48:35.882: INFO: Created: latency-svc-79jx9
Jul  5 13:48:35.915: INFO: Got endpoints: latency-svc-79jx9 [861.580489ms]
Jul  5 13:48:35.944: INFO: Created: latency-svc-z6jxk
Jul  5 13:48:36.044: INFO: Got endpoints: latency-svc-z6jxk [947.809907ms]
Jul  5 13:48:36.046: INFO: Created: latency-svc-vdxpc
Jul  5 13:48:36.059: INFO: Got endpoints: latency-svc-vdxpc [916.538687ms]
Jul  5 13:48:36.079: INFO: Created: latency-svc-2xvw9
Jul  5 13:48:36.095: INFO: Got endpoints: latency-svc-2xvw9 [909.232006ms]
Jul  5 13:48:36.128: INFO: Created: latency-svc-wjvvh
Jul  5 13:48:36.137: INFO: Got endpoints: latency-svc-wjvvh [909.009ms]
Jul  5 13:48:36.194: INFO: Created: latency-svc-qtpg6
Jul  5 13:48:36.201: INFO: Got endpoints: latency-svc-qtpg6 [877.384185ms]
Jul  5 13:48:36.232: INFO: Created: latency-svc-79qwv
Jul  5 13:48:36.246: INFO: Got endpoints: latency-svc-79qwv [873.606346ms]
Jul  5 13:48:36.268: INFO: Created: latency-svc-57gtl
Jul  5 13:48:36.288: INFO: Got endpoints: latency-svc-57gtl [858.527761ms]
Jul  5 13:48:36.398: INFO: Created: latency-svc-6zvrh
Jul  5 13:48:36.414: INFO: Got endpoints: latency-svc-6zvrh [944.859327ms]
Jul  5 13:48:36.499: INFO: Created: latency-svc-ktfjf
Jul  5 13:48:36.502: INFO: Got endpoints: latency-svc-ktfjf [989.744356ms]
Jul  5 13:48:36.554: INFO: Created: latency-svc-9j4ng
Jul  5 13:48:36.571: INFO: Got endpoints: latency-svc-9j4ng [991.736343ms]
Jul  5 13:48:36.595: INFO: Created: latency-svc-qs7k5
Jul  5 13:48:36.673: INFO: Got endpoints: latency-svc-qs7k5 [1.052198863s]
Jul  5 13:48:36.674: INFO: Created: latency-svc-s44qt
Jul  5 13:48:36.679: INFO: Got endpoints: latency-svc-s44qt [1.01629009s]
Jul  5 13:48:36.712: INFO: Created: latency-svc-d7fw8
Jul  5 13:48:36.728: INFO: Got endpoints: latency-svc-d7fw8 [981.821404ms]
Jul  5 13:48:36.757: INFO: Created: latency-svc-qnn28
Jul  5 13:48:36.810: INFO: Got endpoints: latency-svc-qnn28 [957.808162ms]
Jul  5 13:48:36.823: INFO: Created: latency-svc-rjpjh
Jul  5 13:48:36.847: INFO: Got endpoints: latency-svc-rjpjh [932.171321ms]
Jul  5 13:48:36.878: INFO: Created: latency-svc-6546l
Jul  5 13:48:36.890: INFO: Got endpoints: latency-svc-6546l [846.658166ms]
Jul  5 13:48:36.955: INFO: Created: latency-svc-vbtxk
Jul  5 13:48:36.958: INFO: Got endpoints: latency-svc-vbtxk [898.784361ms]
Jul  5 13:48:36.982: INFO: Created: latency-svc-47k9n
Jul  5 13:48:36.999: INFO: Got endpoints: latency-svc-47k9n [903.282065ms]
Jul  5 13:48:37.018: INFO: Created: latency-svc-4vjmm
Jul  5 13:48:37.035: INFO: Got endpoints: latency-svc-4vjmm [897.394103ms]
Jul  5 13:48:37.035: INFO: Latencies: [48.754393ms 84.98349ms 133.56627ms 145.176002ms 181.608445ms 226.062612ms 350.188146ms 416.162991ms 510.449559ms 524.648339ms 578.485106ms 651.699362ms 665.421327ms 675.131945ms 693.960933ms 698.396674ms 702.801295ms 711.140805ms 716.794336ms 723.653922ms 724.05917ms 728.483294ms 729.397177ms 735.348056ms 736.817622ms 741.754021ms 747.638081ms 747.85323ms 747.987037ms 748.109986ms 748.291088ms 753.897301ms 762.603543ms 762.817259ms 765.412533ms 766.12036ms 769.591699ms 772.063642ms 774.744818ms 774.936677ms 776.894258ms 776.960889ms 778.372402ms 788.877956ms 789.100633ms 789.490478ms 789.717325ms 789.896593ms 795.150823ms 796.334565ms 802.22328ms 803.475307ms 807.156516ms 807.697619ms 810.663945ms 811.749131ms 815.966429ms 817.517734ms 818.979584ms 819.184338ms 819.516014ms 820.670297ms 825.181414ms 825.600104ms 827.300249ms 833.860558ms 835.210158ms 835.784545ms 837.118244ms 837.163589ms 839.719648ms 840.290039ms 843.244403ms 843.714182ms 843.770644ms 843.97226ms 844.493285ms 846.223877ms 846.262148ms 846.658166ms 855.124219ms 855.337628ms 855.375539ms 858.527761ms 861.412133ms 861.580489ms 863.052702ms 864.097451ms 865.034205ms 865.681343ms 866.086458ms 867.353492ms 870.735332ms 872.596834ms 873.225534ms 873.557817ms 873.606346ms 874.241073ms 875.233621ms 877.384185ms 879.209627ms 879.411688ms 879.558316ms 879.840596ms 885.007756ms 885.036474ms 885.21361ms 885.21383ms 885.841519ms 886.953771ms 889.157913ms 890.624993ms 890.856803ms 890.992385ms 891.919256ms 894.729926ms 895.928678ms 897.247028ms 897.394103ms 898.784361ms 899.226607ms 900.680555ms 901.134176ms 902.714276ms 903.030056ms 903.282065ms 903.881326ms 907.324199ms 909.009ms 909.232006ms 909.241703ms 909.249873ms 909.425898ms 910.260532ms 912.725244ms 915.888153ms 916.538687ms 921.120374ms 921.442181ms 921.70546ms 927.367409ms 932.171321ms 934.181097ms 936.597645ms 943.298425ms 944.859327ms 947.809907ms 954.382782ms 957.808162ms 957.883445ms 958.194712ms 958.263949ms 958.322897ms 965.54471ms 966.92123ms 969.02063ms 976.132213ms 981.683496ms 981.821404ms 982.059314ms 988.359001ms 989.744356ms 990.207721ms 991.736343ms 994.0727ms 999.169032ms 1.004961491s 1.008036162s 1.011941406s 1.01629009s 1.022897243s 1.026199457s 1.028997836s 1.035769723s 1.039949941s 1.044168671s 1.052198863s 1.053485255s 1.05871085s 1.059216879s 1.060379348s 1.065204372s 1.065353741s 1.068932961s 1.083319287s 3.292152756s 3.319838898s 3.321406192s 3.322616809s 3.325375811s 3.355219949s 3.355361309s 3.356667606s 3.369051029s 3.38250184s 3.406383132s 3.418138156s 3.421143184s 3.424677492s 3.442696094s]
Jul  5 13:48:37.035: INFO: 50 %ile: 879.209627ms
Jul  5 13:48:37.035: INFO: 90 %ile: 1.060379348s
Jul  5 13:48:37.036: INFO: 99 %ile: 3.424677492s
Jul  5 13:48:37.036: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:48:37.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3474" for this suite.
Jul  5 13:49:03.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:49:03.141: INFO: namespace svc-latency-3474 deletion completed in 26.099159125s

• [SLOW TEST:43.473 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
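For reference, the percentile figures above follow from sorting the 200 "Got endpoints" latencies and indexing into the sorted slice. The sketch below is a minimal Go reconstruction; the 0-based floor(p*n) index rule is inferred from the reported numbers (it reproduces all three), not taken from the framework source, and the sample data is elided.

  package main

  import (
    "fmt"
    "sort"
    "time"
  )

  // percentile picks the sample at 0-based index floor(p*n) from an
  // ascending-sorted slice. This index rule is an inference from the
  // figures reported in the log, not from the framework's source.
  func percentile(sorted []time.Duration, p float64) time.Duration {
    if len(sorted) == 0 {
      return 0
    }
    i := int(p * float64(len(sorted)))
    if i >= len(sorted) {
      i = len(sorted) - 1
    }
    return sorted[i]
  }

  func main() {
    samples := []time.Duration{ /* the 200 "Got endpoints" latencies, elided */ }
    sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
    for _, p := range []float64{0.50, 0.90, 0.99} {
      fmt.Printf("%.0f %%ile: %v\n", p*100, percentile(samples, p))
    }
  }

For n = 200 this rule selects the 101st, 181st, and 199th smallest samples: 879.209627ms, 1.060379348s, and 3.424677492s, matching the three values reported above.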
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:49:03.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-36745734-eaa2-47a5-a439-b6c2989fd8e7
STEP: Creating configMap with name cm-test-opt-upd-759a2c94-a8b7-4217-90b6-d4e759520c5c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-36745734-eaa2-47a5-a439-b6c2989fd8e7
STEP: Updating configmap cm-test-opt-upd-759a2c94-a8b7-4217-90b6-d4e759520c5c
STEP: Creating configMap with name cm-test-opt-create-bcf16791-89b9-478b-b506-722a4f4d3235
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:49:13.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1594" for this suite.
Jul  5 13:49:35.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:49:35.403: INFO: namespace projected-1594 deletion completed in 22.086618605s

• [SLOW TEST:32.261 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
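The "optional updates" sequence above (delete one optional configMap, update a second, create a third, then watch the volume) relies on configMap projections marked Optional, so the pod keeps running and the mounted files converge on the new state. A minimal sketch of such a volume using the k8s.io/api/core/v1 types; the volume and configMap names are illustrative, not the generated ones from the log.

  package main

  import v1 "k8s.io/api/core/v1"

  func main() {
    optional := true
    // Illustrative names; the test generates its own cm-test-opt-* names.
    vol := v1.Volume{
      Name: "projected-configmaps-volume",
      VolumeSource: v1.VolumeSource{
        Projected: &v1.ProjectedVolumeSource{
          Sources: []v1.VolumeProjection{{
            ConfigMap: &v1.ConfigMapProjection{
              LocalObjectReference: v1.LocalObjectReference{Name: "cm-test-opt-del"},
              // Optional: the pod keeps running (and the volume updates)
              // even when this configMap is deleted.
              Optional: &optional,
            },
          }},
        },
      },
    }
    _ = vol
  }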
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:49:35.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:49:39.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9552" for this suite.
Jul  5 13:49:45.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:49:45.702: INFO: namespace emptydir-wrapper-9552 deletion completed in 6.093382434s

• [SLOW TEST:10.298 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
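The wrapper-volume test above mounts a secret-backed volume and a configMap-backed volume side by side in one pod and verifies the mounts do not conflict. A minimal sketch of the two volume sources involved; names and mount paths are illustrative.

  package main

  import v1 "k8s.io/api/core/v1"

  func main() {
    // Illustrative names and paths; the test uses generated ones.
    volumes := []v1.Volume{
      {
        Name:         "secret-volume",
        VolumeSource: v1.VolumeSource{Secret: &v1.SecretVolumeSource{SecretName: "wrapper-secret"}},
      },
      {
        Name: "configmap-volume",
        VolumeSource: v1.VolumeSource{
          ConfigMap: &v1.ConfigMapVolumeSource{
            LocalObjectReference: v1.LocalObjectReference{Name: "wrapper-configmap"},
          },
        },
      },
    }
    mounts := []v1.VolumeMount{
      {Name: "secret-volume", MountPath: "/etc/secret-volume"},
      {Name: "configmap-volume", MountPath: "/etc/configmap-volume"},
    }
    _, _ = volumes, mounts
  }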
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:49:45.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 13:49:45.735: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jul  5 13:49:47.781: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:49:49.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1810" for this suite.
Jul  5 13:49:57.392: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:49:57.479: INFO: namespace replication-controller-1810 deletion completed in 8.243797535s

• [SLOW TEST:11.777 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
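The sequence above hinges on a ResourceQuota that caps the namespace at two pods, so an RC asking for more replicas surfaces a ReplicaFailure condition until it is scaled down to fit. A minimal sketch of such a quota object; only the name "condition-test" and the two-pod limit are taken from the log.

  package main

  import (
    v1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  )

  func main() {
    quota := v1.ResourceQuota{
      ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
      Spec: v1.ResourceQuotaSpec{
        // Hard cap of two pods in the namespace, as described in the log.
        Hard: v1.ResourceList{v1.ResourcePods: resource.MustParse("2")},
      },
    }
    _ = quota
  }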
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:49:57.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-0abcae95-3b83-410f-b03d-2e6f4040dec7
STEP: Creating a pod to test consume configMaps
Jul  5 13:49:57.561: INFO: Waiting up to 5m0s for pod "pod-configmaps-577dd52e-0258-41ca-bffc-e658464549df" in namespace "configmap-2337" to be "success or failure"
Jul  5 13:49:57.580: INFO: Pod "pod-configmaps-577dd52e-0258-41ca-bffc-e658464549df": Phase="Pending", Reason="", readiness=false. Elapsed: 19.115907ms
Jul  5 13:49:59.728: INFO: Pod "pod-configmaps-577dd52e-0258-41ca-bffc-e658464549df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166885248s
Jul  5 13:50:01.732: INFO: Pod "pod-configmaps-577dd52e-0258-41ca-bffc-e658464549df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171156394s
STEP: Saw pod success
Jul  5 13:50:01.732: INFO: Pod "pod-configmaps-577dd52e-0258-41ca-bffc-e658464549df" satisfied condition "success or failure"
Jul  5 13:50:01.734: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-577dd52e-0258-41ca-bffc-e658464549df container configmap-volume-test: 
STEP: delete the pod
Jul  5 13:50:02.778: INFO: Waiting for pod pod-configmaps-577dd52e-0258-41ca-bffc-e658464549df to disappear
Jul  5 13:50:02.943: INFO: Pod pod-configmaps-577dd52e-0258-41ca-bffc-e658464549df no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:50:02.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2337" for this suite.
Jul  5 13:50:08.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:50:09.028: INFO: namespace configmap-2337 deletion completed in 6.080265467s

• [SLOW TEST:11.548 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
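"Mappings and Item mode" in the test above refers to projecting a configMap through an items list that remaps a data key to a file path and pins that file's mode. A minimal sketch; the key, path, and 0400 mode are illustrative assumptions.

  package main

  import v1 "k8s.io/api/core/v1"

  func main() {
    mode := int32(0400) // illustrative file mode; the test chooses its own
    src := v1.ConfigMapVolumeSource{
      LocalObjectReference: v1.LocalObjectReference{Name: "configmap-test-volume-map"},
      Items: []v1.KeyToPath{{
        Key:  "data-1",         // illustrative key in the configMap
        Path: "path/to/data-2", // remapped file path inside the volume
        Mode: &mode,
      }},
    }
    _ = src
  }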
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:50:09.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-a7176ddb-4c6a-4ac4-bebd-72685e7531f4
STEP: Creating a pod to test consume secrets
Jul  5 13:50:09.178: INFO: Waiting up to 5m0s for pod "pod-secrets-2d52d13e-b0d0-4f0c-9e28-699d07a00213" in namespace "secrets-4380" to be "success or failure"
Jul  5 13:50:09.195: INFO: Pod "pod-secrets-2d52d13e-b0d0-4f0c-9e28-699d07a00213": Phase="Pending", Reason="", readiness=false. Elapsed: 17.618167ms
Jul  5 13:50:11.199: INFO: Pod "pod-secrets-2d52d13e-b0d0-4f0c-9e28-699d07a00213": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021541113s
Jul  5 13:50:13.231: INFO: Pod "pod-secrets-2d52d13e-b0d0-4f0c-9e28-699d07a00213": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053771721s
STEP: Saw pod success
Jul  5 13:50:13.231: INFO: Pod "pod-secrets-2d52d13e-b0d0-4f0c-9e28-699d07a00213" satisfied condition "success or failure"
Jul  5 13:50:13.234: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-2d52d13e-b0d0-4f0c-9e28-699d07a00213 container secret-volume-test: 
STEP: delete the pod
Jul  5 13:50:13.366: INFO: Waiting for pod pod-secrets-2d52d13e-b0d0-4f0c-9e28-699d07a00213 to disappear
Jul  5 13:50:13.411: INFO: Pod pod-secrets-2d52d13e-b0d0-4f0c-9e28-699d07a00213 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:50:13.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4380" for this suite.
Jul  5 13:50:19.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:50:19.580: INFO: namespace secrets-4380 deletion completed in 6.165962538s

• [SLOW TEST:10.553 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
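The secret variant above works the same way as the configMap case: an items list remaps a key to a path and sets the file mode, here on a SecretVolumeSource. A minimal sketch with illustrative key, path, and mode.

  package main

  import v1 "k8s.io/api/core/v1"

  func main() {
    mode := int32(0400) // illustrative item mode
    src := v1.SecretVolumeSource{
      SecretName: "secret-test-map",
      Items: []v1.KeyToPath{{
        Key:  "data-1",         // illustrative key in the secret
        Path: "path/to/data-2", // remapped file path inside the volume
        Mode: &mode,
      }},
    }
    _ = src
  }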
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:50:19.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul  5 13:50:20.248: INFO: Pod name wrapped-volume-race-29b41f23-f3c1-490f-bc22-be7b0fc178ec: Found 0 pods out of 5
Jul  5 13:50:25.256: INFO: Pod name wrapped-volume-race-29b41f23-f3c1-490f-bc22-be7b0fc178ec: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-29b41f23-f3c1-490f-bc22-be7b0fc178ec in namespace emptydir-wrapper-9026, will wait for the garbage collector to delete the pods
Jul  5 13:50:39.360: INFO: Deleting ReplicationController wrapped-volume-race-29b41f23-f3c1-490f-bc22-be7b0fc178ec took: 21.64065ms
Jul  5 13:50:39.660: INFO: Terminating ReplicationController wrapped-volume-race-29b41f23-f3c1-490f-bc22-be7b0fc178ec pods took: 300.291207ms
STEP: Creating RC which spawns configmap-volume pods
Jul  5 13:51:27.171: INFO: Pod name wrapped-volume-race-d3192412-0265-46da-b54c-c4a7cbc9276d: Found 0 pods out of 5
Jul  5 13:51:32.179: INFO: Pod name wrapped-volume-race-d3192412-0265-46da-b54c-c4a7cbc9276d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d3192412-0265-46da-b54c-c4a7cbc9276d in namespace emptydir-wrapper-9026, will wait for the garbage collector to delete the pods
Jul  5 13:51:46.435: INFO: Deleting ReplicationController wrapped-volume-race-d3192412-0265-46da-b54c-c4a7cbc9276d took: 7.886076ms
Jul  5 13:51:46.735: INFO: Terminating ReplicationController wrapped-volume-race-d3192412-0265-46da-b54c-c4a7cbc9276d pods took: 300.268882ms
STEP: Creating RC which spawns configmap-volume pods
Jul  5 13:52:26.280: INFO: Pod name wrapped-volume-race-3f40b5d6-4696-4eb7-9409-63abed224c4c: Found 0 pods out of 5
Jul  5 13:52:31.289: INFO: Pod name wrapped-volume-race-3f40b5d6-4696-4eb7-9409-63abed224c4c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-3f40b5d6-4696-4eb7-9409-63abed224c4c in namespace emptydir-wrapper-9026, will wait for the garbage collector to delete the pods
Jul  5 13:52:47.377: INFO: Deleting ReplicationController wrapped-volume-race-3f40b5d6-4696-4eb7-9409-63abed224c4c took: 9.546378ms
Jul  5 13:52:47.677: INFO: Terminating ReplicationController wrapped-volume-race-3f40b5d6-4696-4eb7-9409-63abed224c4c pods took: 300.272229ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:53:27.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9026" for this suite.
Jul  5 13:53:35.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:53:35.690: INFO: namespace emptydir-wrapper-9026 deletion completed in 8.080958885s

• [SLOW TEST:196.109 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
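Each RC above spawns five pods whose template mounts all 50 configMaps as separate volumes, which is the pattern that historically raced inside the emptyDir wrapper. A minimal sketch of building such a volume list; the name pattern is illustrative.

  package main

  import (
    "fmt"

    v1 "k8s.io/api/core/v1"
  )

  func main() {
    // One volume per configMap, all mounted by the same pod template;
    // the name pattern is illustrative, the test derives its own.
    var volumes []v1.Volume
    for i := 0; i < 50; i++ {
      name := fmt.Sprintf("racey-configmap-%d", i)
      volumes = append(volumes, v1.Volume{
        Name: name,
        VolumeSource: v1.VolumeSource{
          ConfigMap: &v1.ConfigMapVolumeSource{
            LocalObjectReference: v1.LocalObjectReference{Name: name},
          },
        },
      })
    }
    _ = volumes
  }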
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:53:35.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5814
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node on which to schedule the stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5814
STEP: Creating statefulset with conflicting port in namespace statefulset-5814
STEP: Waiting until pod test-pod is running in namespace statefulset-5814
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-5814
Jul  5 13:53:41.868: INFO: Observed stateful pod in namespace: statefulset-5814, name: ss-0, uid: a5a81fa8-3db9-40ec-9859-5d39baa2c348, status phase: Pending. Waiting for statefulset controller to delete.
Jul  5 13:53:42.392: INFO: Observed stateful pod in namespace: statefulset-5814, name: ss-0, uid: a5a81fa8-3db9-40ec-9859-5d39baa2c348, status phase: Failed. Waiting for statefulset controller to delete.
Jul  5 13:53:42.415: INFO: Observed stateful pod in namespace: statefulset-5814, name: ss-0, uid: a5a81fa8-3db9-40ec-9859-5d39baa2c348, status phase: Failed. Waiting for statefulset controller to delete.
Jul  5 13:53:42.452: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5814
STEP: Removing pod with conflicting port in namespace statefulset-5814
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5814 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jul  5 13:53:48.591: INFO: Deleting all statefulset in ns statefulset-5814
Jul  5 13:53:48.594: INFO: Scaling statefulset ss to 0
Jul  5 13:53:58.629: INFO: Waiting for statefulset status.replicas updated to 0
Jul  5 13:53:58.632: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:53:58.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5814" for this suite.
Jul  5 13:54:04.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:54:04.786: INFO: namespace statefulset-5814 deletion completed in 6.112866492s

• [SLOW TEST:29.096 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
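The final wait above ("recreated ... and running") amounts to polling until a pod named ss-0 reappears with a UID different from the evicted pod's and reaches Running. A minimal client-go sketch under that reading; the context-free Get signature matches the v1.15-era client-go this log comes from, and the 2s interval and 5m timeout are assumptions.

  package statefulset

  import (
    "time"

    v1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
  )

  // waitForRecreated polls until ss-0 exists with a UID different from the
  // evicted pod's and has reached the Running phase.
  func waitForRecreated(c kubernetes.Interface, ns string, oldUID types.UID) error {
    return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
      pod, err := c.CoreV1().Pods(ns).Get("ss-0", metav1.GetOptions{})
      if err != nil {
        return false, nil // not recreated yet; keep polling
      }
      return pod.UID != oldUID && pod.Status.Phase == v1.PodRunning, nil
    })
  }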
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:54:04.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8212.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8212.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8212.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8212.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8212.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8212.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8212.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8212.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8212.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8212.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 43.20.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.20.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.20.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.20.43_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8212.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8212.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8212.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8212.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8212.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8212.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8212.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8212.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8212.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8212.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8212.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 43.20.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.20.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.20.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.20.43_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  5 13:54:10.971: INFO: Unable to read wheezy_udp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:10.974: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:10.977: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:10.980: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:11.049: INFO: Unable to read jessie_udp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:11.052: INFO: Unable to read jessie_tcp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:11.055: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:11.058: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:11.078: INFO: Lookups using dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551 failed for: [wheezy_udp@dns-test-service.dns-8212.svc.cluster.local wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local jessie_udp@dns-test-service.dns-8212.svc.cluster.local jessie_tcp@dns-test-service.dns-8212.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local]

Jul  5 13:54:16.083: INFO: Unable to read wheezy_udp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:16.087: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:16.091: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:16.095: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:16.139: INFO: Unable to read jessie_udp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:16.140: INFO: Unable to read jessie_tcp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:16.142: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:16.144: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:16.157: INFO: Lookups using dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551 failed for: [wheezy_udp@dns-test-service.dns-8212.svc.cluster.local wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local jessie_udp@dns-test-service.dns-8212.svc.cluster.local jessie_tcp@dns-test-service.dns-8212.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local]

Jul  5 13:54:21.083: INFO: Unable to read wheezy_udp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:21.087: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:21.095: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:21.098: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:21.116: INFO: Unable to read jessie_udp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:21.118: INFO: Unable to read jessie_tcp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:21.121: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:21.124: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:21.142: INFO: Lookups using dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551 failed for: [wheezy_udp@dns-test-service.dns-8212.svc.cluster.local wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local jessie_udp@dns-test-service.dns-8212.svc.cluster.local jessie_tcp@dns-test-service.dns-8212.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local]

Jul  5 13:54:26.083: INFO: Unable to read wheezy_udp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:26.086: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:26.089: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:26.092: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:26.112: INFO: Unable to read jessie_udp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:26.115: INFO: Unable to read jessie_tcp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:26.118: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:26.120: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:26.135: INFO: Lookups using dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551 failed for: [wheezy_udp@dns-test-service.dns-8212.svc.cluster.local wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local jessie_udp@dns-test-service.dns-8212.svc.cluster.local jessie_tcp@dns-test-service.dns-8212.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local]

Jul  5 13:54:31.082: INFO: Unable to read wheezy_udp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:31.086: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:31.088: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:31.092: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:31.147: INFO: Unable to read jessie_udp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:31.150: INFO: Unable to read jessie_tcp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:31.153: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:31.156: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:31.174: INFO: Lookups using dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551 failed for: [wheezy_udp@dns-test-service.dns-8212.svc.cluster.local wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local jessie_udp@dns-test-service.dns-8212.svc.cluster.local jessie_tcp@dns-test-service.dns-8212.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local]

Jul  5 13:54:36.083: INFO: Unable to read wheezy_udp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:36.087: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:36.091: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:36.094: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:36.116: INFO: Unable to read jessie_udp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:36.119: INFO: Unable to read jessie_tcp@dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:36.122: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:36.126: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local from pod dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551: the server could not find the requested resource (get pods dns-test-6c4c1b73-876c-4c62-a92b-54354e294551)
Jul  5 13:54:36.146: INFO: Lookups using dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551 failed for: [wheezy_udp@dns-test-service.dns-8212.svc.cluster.local wheezy_tcp@dns-test-service.dns-8212.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local jessie_udp@dns-test-service.dns-8212.svc.cluster.local jessie_tcp@dns-test-service.dns-8212.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8212.svc.cluster.local]

Jul  5 13:54:41.195: INFO: DNS probes using dns-8212/dns-test-6c4c1b73-876c-4c62-a92b-54354e294551 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:54:41.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8212" for this suite.
Jul  5 13:54:47.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:54:47.985: INFO: namespace dns-8212 deletion completed in 6.128580472s

• [SLOW TEST:43.198 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
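The wheezy and jessie probe loops above drive dig from inside the cluster; their two core checks against the headless service, an A lookup on the service name and an SRV lookup for _http._tcp, can also be expressed with Go's resolver. A minimal sketch; the names are taken from the log, and it only resolves when run inside the cluster where the cluster.local suffix is served.

  package main

  import (
    "fmt"
    "net"
  )

  func main() {
    host := "dns-test-service.dns-8212.svc.cluster.local"
    // A record for the service name.
    if addrs, err := net.LookupHost(host); err == nil && len(addrs) > 0 {
      fmt.Println("A record OK:", addrs)
    }
    // SRV record for the named http port over TCP.
    if _, srvs, err := net.LookupSRV("http", "tcp", host); err == nil && len(srvs) > 0 {
      fmt.Println("SRV record OK:", len(srvs), "targets")
    }
  }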
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:54:47.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-18dd5fb0-0442-48a4-96bf-b979e0dac90c
STEP: Creating a pod to test consume secrets
Jul  5 13:54:48.074: INFO: Waiting up to 5m0s for pod "pod-secrets-ed8d9e45-9b5b-420c-906f-6321089e2288" in namespace "secrets-1426" to be "success or failure"
Jul  5 13:54:48.090: INFO: Pod "pod-secrets-ed8d9e45-9b5b-420c-906f-6321089e2288": Phase="Pending", Reason="", readiness=false. Elapsed: 16.572569ms
Jul  5 13:54:50.095: INFO: Pod "pod-secrets-ed8d9e45-9b5b-420c-906f-6321089e2288": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020916449s
Jul  5 13:54:52.099: INFO: Pod "pod-secrets-ed8d9e45-9b5b-420c-906f-6321089e2288": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02521029s
STEP: Saw pod success
Jul  5 13:54:52.099: INFO: Pod "pod-secrets-ed8d9e45-9b5b-420c-906f-6321089e2288" satisfied condition "success or failure"
Jul  5 13:54:52.102: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-ed8d9e45-9b5b-420c-906f-6321089e2288 container secret-volume-test: 
STEP: delete the pod
Jul  5 13:54:52.139: INFO: Waiting for pod pod-secrets-ed8d9e45-9b5b-420c-906f-6321089e2288 to disappear
Jul  5 13:54:52.150: INFO: Pod pod-secrets-ed8d9e45-9b5b-420c-906f-6321089e2288 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:54:52.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1426" for this suite.
Jul  5 13:54:58.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:54:58.251: INFO: namespace secrets-1426 deletion completed in 6.097847069s

• [SLOW TEST:10.266 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
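
Editor's note: the test above exposes a single Secret to one pod through two independent volumes, then checks the mounted files from the secret-volume-test container. A minimal sketch of such a pod spec (k8s.io/api types; the secret name, image, command, and mount paths are illustrative, not the generated names in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	const secretName = "secret-test-example" // the run above uses a UUID-suffixed name
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			// The same Secret backs two separate volumes.
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
				{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
			},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox",
				Args:  []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
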
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:54:58.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul  5 13:54:58.363: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:54:58.389: INFO: Number of nodes with available pods: 0
Jul  5 13:54:58.389: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:54:59.395: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:54:59.398: INFO: Number of nodes with available pods: 0
Jul  5 13:54:59.398: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:55:00.394: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:55:00.398: INFO: Number of nodes with available pods: 0
Jul  5 13:55:00.398: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:55:01.541: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:55:01.545: INFO: Number of nodes with available pods: 0
Jul  5 13:55:01.545: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:55:02.396: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:55:02.414: INFO: Number of nodes with available pods: 2
Jul  5 13:55:02.414: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jul  5 13:55:02.444: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 13:55:02.491: INFO: Number of nodes with available pods: 2
Jul  5 13:55:02.491: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4480, will wait for the garbage collector to delete the pods
Jul  5 13:55:03.730: INFO: Deleting DaemonSet.extensions daemon-set took: 20.648348ms
Jul  5 13:55:04.130: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.258547ms
Jul  5 13:55:16.034: INFO: Number of nodes with available pods: 0
Jul  5 13:55:16.034: INFO: Number of running nodes: 0, number of available pods: 0
Jul  5 13:55:16.037: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4480/daemonsets","resourceVersion":"241902"},"items":null}

Jul  5 13:55:16.040: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4480/pods","resourceVersion":"241902"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:55:16.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4480" for this suite.
Jul  5 13:55:22.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:55:22.143: INFO: namespace daemonsets-4480 deletion completed in 6.08893582s

• [SLOW TEST:23.892 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
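
Editor's note: the repeated "DaemonSet pods can't tolerate node iruya-control-plane" lines above show why the daemon pods skip the master — the pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint reported in the log. A minimal sketch of a DaemonSet whose pods would also land on that node (names and image are illustrative, not the e2e fixture):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
					// Without this toleration the pods skip the control-plane
					// node, exactly as the taint messages above report.
					Tolerations: []corev1.Toleration{{
						Key:      "node-role.kubernetes.io/master",
						Operator: corev1.TolerationOpExists,
						Effect:   corev1.TaintEffectNoSchedule,
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(b))
}
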
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:55:22.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 13:55:22.300: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"078d26e3-706e-4f4a-b90a-c09f886d002b", Controller:(*bool)(0xc0030c9c02), BlockOwnerDeletion:(*bool)(0xc0030c9c03)}}
Jul  5 13:55:22.366: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"c14f4f36-fe63-4f63-926a-90c343d1dced", Controller:(*bool)(0xc002d5a8c2), BlockOwnerDeletion:(*bool)(0xc002d5a8c3)}}
Jul  5 13:55:22.414: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"af7d001b-37e4-40dd-934b-5f932c2eaef4", Controller:(*bool)(0xc0030c9db2), BlockOwnerDeletion:(*bool)(0xc0030c9db3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:55:27.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7189" for this suite.
Jul  5 13:55:33.482: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:55:33.559: INFO: namespace gc-7189 deletion completed in 6.088945713s

• [SLOW TEST:11.416 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
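
Editor's note: the three INFO lines above dump a deliberate ownership cycle — pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2 — which the garbage collector must still be able to delete. A small sketch of how such OwnerReferences are wired (pod names mirror the log; real UIDs are only known after the API server creates each owner):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// ownedBy builds an OwnerReference pointing at an already-created pod,
// mirroring the Controller/BlockOwnerDeletion flags dumped in the log above.
func ownedBy(owner *corev1.Pod) metav1.OwnerReference {
	t := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID, // empty here; set by the API server at create time
		Controller:         &t,
		BlockOwnerDeletion: &t,
	}
}

func main() {
	pod1 := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod1"}}
	pod2 := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod2"}}
	pod3 := &corev1.Pod{ObjectMeta: metav1.ObjectMeta{Name: "pod3"}}
	// Close the circle: pod1 <- pod3, pod2 <- pod1, pod3 <- pod2.
	pod1.OwnerReferences = []metav1.OwnerReference{ownedBy(pod3)}
	pod2.OwnerReferences = []metav1.OwnerReference{ownedBy(pod1)}
	pod3.OwnerReferences = []metav1.OwnerReference{ownedBy(pod2)}
	for _, p := range []*corev1.Pod{pod1, pod2, pod3} {
		fmt.Printf("%s ownerReferences=%+v\n", p.Name, p.OwnerReferences)
	}
}
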
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:55:33.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7/configmap-test-c297216b-0069-4827-881a-63a279a8879a
STEP: Creating a pod to test consume configMaps
Jul  5 13:55:33.961: INFO: Waiting up to 5m0s for pod "pod-configmaps-db23bbed-822c-431a-945a-2d6e4fec24d2" in namespace "configmap-7" to be "success or failure"
Jul  5 13:55:34.000: INFO: Pod "pod-configmaps-db23bbed-822c-431a-945a-2d6e4fec24d2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.784021ms
Jul  5 13:55:36.143: INFO: Pod "pod-configmaps-db23bbed-822c-431a-945a-2d6e4fec24d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18227652s
Jul  5 13:55:38.147: INFO: Pod "pod-configmaps-db23bbed-822c-431a-945a-2d6e4fec24d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.186889297s
STEP: Saw pod success
Jul  5 13:55:38.148: INFO: Pod "pod-configmaps-db23bbed-822c-431a-945a-2d6e4fec24d2" satisfied condition "success or failure"
Jul  5 13:55:38.151: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-db23bbed-822c-431a-945a-2d6e4fec24d2 container env-test: 
STEP: delete the pod
Jul  5 13:55:38.170: INFO: Waiting for pod pod-configmaps-db23bbed-822c-431a-945a-2d6e4fec24d2 to disappear
Jul  5 13:55:38.174: INFO: Pod pod-configmaps-db23bbed-822c-431a-945a-2d6e4fec24d2 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:55:38.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7" for this suite.
Jul  5 13:55:44.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:55:44.283: INFO: namespace configmap-7 deletion completed in 6.10555316s

• [SLOW TEST:10.722 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
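
Editor's note: the env-test container above consumes a ConfigMap key as an environment variable via valueFrom/configMapKeyRef. A minimal sketch of that pattern (the ConfigMap name, variable name, and key are illustrative, not the generated ones in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1", // hypothetical variable name
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-example"},
							Key:                  "data-1", // hypothetical key
						},
					},
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
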
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:55:44.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-ad04e6f4-2be5-4bf4-abb1-cecf3bafc691
STEP: Creating a pod to test consume secrets
Jul  5 13:55:44.369: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2418ace8-1ffe-431d-abff-41cf90a296c8" in namespace "projected-8713" to be "success or failure"
Jul  5 13:55:44.373: INFO: Pod "pod-projected-secrets-2418ace8-1ffe-431d-abff-41cf90a296c8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.214283ms
Jul  5 13:55:46.377: INFO: Pod "pod-projected-secrets-2418ace8-1ffe-431d-abff-41cf90a296c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007206676s
Jul  5 13:55:48.380: INFO: Pod "pod-projected-secrets-2418ace8-1ffe-431d-abff-41cf90a296c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010578156s
STEP: Saw pod success
Jul  5 13:55:48.380: INFO: Pod "pod-projected-secrets-2418ace8-1ffe-431d-abff-41cf90a296c8" satisfied condition "success or failure"
Jul  5 13:55:48.383: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-2418ace8-1ffe-431d-abff-41cf90a296c8 container secret-volume-test: 
STEP: delete the pod
Jul  5 13:55:48.455: INFO: Waiting for pod pod-projected-secrets-2418ace8-1ffe-431d-abff-41cf90a296c8 to disappear
Jul  5 13:55:48.480: INFO: Pod pod-projected-secrets-2418ace8-1ffe-431d-abff-41cf90a296c8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:55:48.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8713" for this suite.
Jul  5 13:55:54.495: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:55:54.562: INFO: namespace projected-8713 deletion completed in 6.077115256s

• [SLOW TEST:10.278 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:55:54.563: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 13:55:54.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9331'
Jul  5 13:55:57.444: INFO: stderr: ""
Jul  5 13:55:57.444: INFO: stdout: "replicationcontroller/redis-master created\n"
Jul  5 13:55:57.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9331'
Jul  5 13:55:57.764: INFO: stderr: ""
Jul  5 13:55:57.764: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Jul  5 13:55:58.768: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:55:58.768: INFO: Found 0 / 1
Jul  5 13:55:59.769: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:55:59.769: INFO: Found 0 / 1
Jul  5 13:56:00.768: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:56:00.768: INFO: Found 0 / 1
Jul  5 13:56:01.769: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:56:01.769: INFO: Found 1 / 1
Jul  5 13:56:01.769: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jul  5 13:56:01.773: INFO: Selector matched 1 pods for map[app:redis]
Jul  5 13:56:01.773: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul  5 13:56:01.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-6mmfp --namespace=kubectl-9331'
Jul  5 13:56:01.889: INFO: stderr: ""
Jul  5 13:56:01.889: INFO: stdout: "Name:           redis-master-6mmfp\nNamespace:      kubectl-9331\nPriority:       0\nNode:           iruya-worker2/172.17.0.7\nStart Time:     Sun, 05 Jul 2020 13:55:57 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.244.1.135\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://829c73d4a2920048862bb1e97ffa1c0d3d9749ffe5203100de3171988bfaf8b1\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 05 Jul 2020 13:56:00 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-595j8 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-595j8:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-595j8\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                    Message\n  ----    ------     ----  ----                    -------\n  Normal  Scheduled  4s    default-scheduler       Successfully assigned kubectl-9331/redis-master-6mmfp to iruya-worker2\n  Normal  Pulled     3s    kubelet, iruya-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    1s    kubelet, iruya-worker2  Created container redis-master\n  Normal  Started    1s    kubelet, iruya-worker2  Started container redis-master\n"
Jul  5 13:56:01.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9331'
Jul  5 13:56:02.011: INFO: stderr: ""
Jul  5 13:56:02.011: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-9331\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: redis-master-6mmfp\n"
Jul  5 13:56:02.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9331'
Jul  5 13:56:02.127: INFO: stderr: ""
Jul  5 13:56:02.127: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-9331\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.106.13.151\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.1.135:6379\nSession Affinity:  None\nEvents:            \n"
Jul  5 13:56:02.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Jul  5 13:56:02.249: INFO: stderr: ""
Jul  5 13:56:02.249: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jul 2020 09:21:09 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Sun, 05 Jul 2020 13:55:44 +0000   Sat, 04 Jul 2020 09:21:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Sun, 05 Jul 2020 13:55:44 +0000   Sat, 04 Jul 2020 09:21:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Sun, 05 Jul 2020 13:55:44 +0000   Sat, 04 Jul 2020 09:21:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Sun, 05 Jul 2020 13:55:44 +0000   Sat, 04 Jul 2020 09:22:00 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.5\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759892Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759892Ki\n pods:               110\nSystem Info:\n Machine ID:                 fde507f5a52540c7a34f064bb6093546\n System UUID:                44367d21-8f36-4238-8532-9bf2cc81f60d\n Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version:             4.15.0-88-generic\n OS Image:                   Ubuntu 19.10\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.3.3-14-g449e9269\n Kubelet Version:            v1.15.11\n Kube-Proxy Version:         v1.15.11\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-7vx76                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     28h\n  kube-system                coredns-5d4dd4b4db-mbldf                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     28h\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28h\n  kube-system                kindnet-87x6m                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      28h\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         28h\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         28h\n  kube-system                kube-proxy-n277x                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28h\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         28h\n  local-path-storage         local-path-provisioner-668779bd7-99cxr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Jul  5 13:56:02.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9331'
Jul  5 13:56:02.360: INFO: stderr: ""
Jul  5 13:56:02.360: INFO: stdout: "Name:         kubectl-9331\nLabels:       e2e-framework=kubectl\n              e2e-run=7631942a-f1ca-46b8-a9fe-6e83c7f4dcb8\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:56:02.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9331" for this suite.
Jul  5 13:56:24.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:56:24.471: INFO: namespace kubectl-9331 deletion completed in 22.107556892s

• [SLOW TEST:29.908 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:56:24.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-d7fac9c1-bcf5-4cdd-920a-7d647fb89544
STEP: Creating secret with name secret-projected-all-test-volume-7abda8b7-8836-4411-9be9-a3363a9ad3c1
STEP: Creating a pod to test Check all projections for projected volume plugin
Jul  5 13:56:24.597: INFO: Waiting up to 5m0s for pod "projected-volume-d06237c5-dcef-42ed-abfb-550304ebbc95" in namespace "projected-4405" to be "success or failure"
Jul  5 13:56:24.600: INFO: Pod "projected-volume-d06237c5-dcef-42ed-abfb-550304ebbc95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.676956ms
Jul  5 13:56:26.604: INFO: Pod "projected-volume-d06237c5-dcef-42ed-abfb-550304ebbc95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006947781s
Jul  5 13:56:28.609: INFO: Pod "projected-volume-d06237c5-dcef-42ed-abfb-550304ebbc95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011728429s
STEP: Saw pod success
Jul  5 13:56:28.609: INFO: Pod "projected-volume-d06237c5-dcef-42ed-abfb-550304ebbc95" satisfied condition "success or failure"
Jul  5 13:56:28.612: INFO: Trying to get logs from node iruya-worker pod projected-volume-d06237c5-dcef-42ed-abfb-550304ebbc95 container projected-all-volume-test: 
STEP: delete the pod
Jul  5 13:56:28.648: INFO: Waiting for pod projected-volume-d06237c5-dcef-42ed-abfb-550304ebbc95 to disappear
Jul  5 13:56:28.655: INFO: Pod projected-volume-d06237c5-dcef-42ed-abfb-550304ebbc95 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:56:28.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4405" for this suite.
Jul  5 13:56:34.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:56:34.765: INFO: namespace projected-4405 deletion completed in 6.106219516s

• [SLOW TEST:10.294 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
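
Editor's note: "all projections" in the test above means a single projected volume combining configMap, secret, and downwardAPI sources. A minimal sketch of such a volume and the pod that mounts it (resource names, image, command, and paths are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				// One volume, three sources: configMap + secret + downward API.
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-volume-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{vol},
			Containers: []corev1.Container{{
				Name:         "projected-all-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /all/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-volume", MountPath: "/all"}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
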
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:56:34.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-psh2
STEP: Creating a pod to test atomic-volume-subpath
Jul  5 13:56:34.909: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-psh2" in namespace "subpath-8352" to be "success or failure"
Jul  5 13:56:34.913: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.785389ms
Jul  5 13:56:36.917: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007381181s
Jul  5 13:56:38.921: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Running", Reason="", readiness=true. Elapsed: 4.011747691s
Jul  5 13:56:40.925: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Running", Reason="", readiness=true. Elapsed: 6.01619241s
Jul  5 13:56:42.930: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Running", Reason="", readiness=true. Elapsed: 8.021175934s
Jul  5 13:56:44.935: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Running", Reason="", readiness=true. Elapsed: 10.025653126s
Jul  5 13:56:46.939: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Running", Reason="", readiness=true. Elapsed: 12.03030886s
Jul  5 13:56:48.942: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Running", Reason="", readiness=true. Elapsed: 14.033251123s
Jul  5 13:56:50.947: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Running", Reason="", readiness=true. Elapsed: 16.037788267s
Jul  5 13:56:52.950: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Running", Reason="", readiness=true. Elapsed: 18.040771634s
Jul  5 13:56:54.954: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Running", Reason="", readiness=true. Elapsed: 20.045325675s
Jul  5 13:56:56.959: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Running", Reason="", readiness=true. Elapsed: 22.049736998s
Jul  5 13:56:58.963: INFO: Pod "pod-subpath-test-downwardapi-psh2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054258265s
STEP: Saw pod success
Jul  5 13:56:58.963: INFO: Pod "pod-subpath-test-downwardapi-psh2" satisfied condition "success or failure"
Jul  5 13:56:58.967: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-psh2 container test-container-subpath-downwardapi-psh2: 
STEP: delete the pod
Jul  5 13:56:59.075: INFO: Waiting for pod pod-subpath-test-downwardapi-psh2 to disappear
Jul  5 13:56:59.086: INFO: Pod pod-subpath-test-downwardapi-psh2 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-psh2
Jul  5 13:56:59.086: INFO: Deleting pod "pod-subpath-test-downwardapi-psh2" in namespace "subpath-8352"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:56:59.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8352" for this suite.
Jul  5 13:57:05.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:57:05.203: INFO: namespace subpath-8352 deletion completed in 6.10808873s

• [SLOW TEST:30.437 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
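
Editor's note: the atomic-writer subpath test above mounts one entry of a downwardAPI volume into the container via subPath, which is why the pod stays Running for several poll intervals while the container re-reads the file. A minimal sketch of the volume/subPath wiring (volume layout, image, and command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "downward/podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-downwardapi"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{vol},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /test-volume/podname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
					// SubPath exposes just one directory of the volume.
					SubPath: "downward",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
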
SSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:57:05.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 13:57:05.291: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: 
alternatives.log
containers/

[identical "alternatives.log / containers/" directory listings repeated for the remaining proxied requests; the rest of this test's output and the header of the following "[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]" test are truncated in the source]

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 13:57:11.563: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul  5 13:57:11.642: INFO: Number of nodes with available pods: 0
Jul  5 13:57:11.642: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul  5 13:57:11.692: INFO: Number of nodes with available pods: 0
Jul  5 13:57:11.692: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:12.744: INFO: Number of nodes with available pods: 0
Jul  5 13:57:12.744: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:13.696: INFO: Number of nodes with available pods: 0
Jul  5 13:57:13.696: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:14.695: INFO: Number of nodes with available pods: 1
Jul  5 13:57:14.695: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul  5 13:57:14.726: INFO: Number of nodes with available pods: 1
Jul  5 13:57:14.726: INFO: Number of running nodes: 0, number of available pods: 1
Jul  5 13:57:15.731: INFO: Number of nodes with available pods: 0
Jul  5 13:57:15.731: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul  5 13:57:15.753: INFO: Number of nodes with available pods: 0
Jul  5 13:57:15.753: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:16.757: INFO: Number of nodes with available pods: 0
Jul  5 13:57:16.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:17.757: INFO: Number of nodes with available pods: 0
Jul  5 13:57:17.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:18.757: INFO: Number of nodes with available pods: 0
Jul  5 13:57:18.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:19.757: INFO: Number of nodes with available pods: 0
Jul  5 13:57:19.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:21.367: INFO: Number of nodes with available pods: 0
Jul  5 13:57:21.367: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:22.848: INFO: Number of nodes with available pods: 0
Jul  5 13:57:22.848: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:24.079: INFO: Number of nodes with available pods: 0
Jul  5 13:57:24.079: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:24.757: INFO: Number of nodes with available pods: 0
Jul  5 13:57:24.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:25.757: INFO: Number of nodes with available pods: 0
Jul  5 13:57:25.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:26.757: INFO: Number of nodes with available pods: 0
Jul  5 13:57:26.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:27.780: INFO: Number of nodes with available pods: 0
Jul  5 13:57:27.781: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:28.757: INFO: Number of nodes with available pods: 0
Jul  5 13:57:28.757: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 13:57:29.757: INFO: Number of nodes with available pods: 1
Jul  5 13:57:29.757: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2445, will wait for the garbage collector to delete the pods
Jul  5 13:57:29.823: INFO: Deleting DaemonSet.extensions daemon-set took: 7.023251ms
Jul  5 13:57:30.123: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.249261ms
Jul  5 13:57:35.931: INFO: Number of nodes with available pods: 0
Jul  5 13:57:35.931: INFO: Number of running nodes: 0, number of available pods: 0
Jul  5 13:57:35.933: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2445/daemonsets","resourceVersion":"242455"},"items":null}

Jul  5 13:57:35.935: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2445/pods","resourceVersion":"242455"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:57:35.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2445" for this suite.
Jul  5 13:57:42.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:57:42.091: INFO: namespace daemonsets-2445 deletion completed in 6.092053929s

• [SLOW TEST:30.640 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
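
Editor's note: the "complex daemon" above is a DaemonSet constrained by a node selector, so relabelling a node from blue to green launches and then evicts its daemon pod; the test also switches the update strategy to RollingUpdate. A minimal sketch of such a DaemonSet (the label key, names, and image are illustrative, not the e2e fixture's):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// Pods land only on nodes carrying this label; relabelling
					// a node starts or stops its daemon pod, as the log shows.
					NodeSelector: map[string]string{"color": "blue"}, // hypothetical key
					Containers:   []corev1.Container{{Name: "app", Image: "nginx"}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(b))
}
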
SS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:57:42.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8672.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8672.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  5 13:57:48.187: INFO: DNS probes using dns-test-4f6def78-d043-4d80-9ae4-22a2992a8651 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8672.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8672.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  5 13:57:54.290: INFO: File jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local from pod  dns-8672/dns-test-ef1b8753-852f-4a1f-bf81-3a1def8784c6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul  5 13:57:54.290: INFO: Lookups using dns-8672/dns-test-ef1b8753-852f-4a1f-bf81-3a1def8784c6 failed for: [jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local]

Jul  5 13:57:59.299: INFO: DNS probes using dns-test-ef1b8753-852f-4a1f-bf81-3a1def8784c6 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8672.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8672.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8672.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  5 13:58:05.972: INFO: File wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local from pod  dns-8672/dns-test-c21a42d8-d989-4c74-bfe4-09bf37d2ffa5 contains '' instead of '10.109.175.66'
Jul  5 13:58:05.975: INFO: Lookups using dns-8672/dns-test-c21a42d8-d989-4c74-bfe4-09bf37d2ffa5 failed for: [wheezy_udp@dns-test-service-3.dns-8672.svc.cluster.local]

Jul  5 13:58:10.984: INFO: DNS probes using dns-test-c21a42d8-d989-4c74-bfe4-09bf37d2ffa5 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:58:11.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8672" for this suite.
Jul  5 13:58:17.134: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:58:17.214: INFO: namespace dns-8672 deletion completed in 6.107921043s

• [SLOW TEST:35.123 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
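
Editor's note: the service under test above is an ExternalName Service, which resolves to a CNAME (first foo.example.com, then bar.example.com, finally converted to a ClusterIP service — each change visible in the probe output). A minimal sketch of the initial object:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeExternalName,
			// DNS answers the service name with this CNAME target; the test
			// later flips it to bar.example.com and then to type ClusterIP.
			ExternalName: "foo.example.com",
		},
	}
	b, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(b))
}
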
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:58:17.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jul  5 13:58:17.294: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix252318000/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:58:17.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4431" for this suite.
Jul  5 13:58:23.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:58:23.506: INFO: namespace kubectl-4431 deletion completed in 6.132225407s

• [SLOW TEST:6.291 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:58:23.506: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jul  5 13:58:23.609: INFO: Waiting up to 5m0s for pod "downward-api-2f5d04b8-416f-41c3-bff3-3171d97a5b41" in namespace "downward-api-3853" to be "success or failure"
Jul  5 13:58:23.621: INFO: Pod "downward-api-2f5d04b8-416f-41c3-bff3-3171d97a5b41": Phase="Pending", Reason="", readiness=false. Elapsed: 11.845196ms
Jul  5 13:58:25.625: INFO: Pod "downward-api-2f5d04b8-416f-41c3-bff3-3171d97a5b41": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016047137s
Jul  5 13:58:27.685: INFO: Pod "downward-api-2f5d04b8-416f-41c3-bff3-3171d97a5b41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075201888s
STEP: Saw pod success
Jul  5 13:58:27.685: INFO: Pod "downward-api-2f5d04b8-416f-41c3-bff3-3171d97a5b41" satisfied condition "success or failure"
Jul  5 13:58:27.688: INFO: Trying to get logs from node iruya-worker2 pod downward-api-2f5d04b8-416f-41c3-bff3-3171d97a5b41 container dapi-container: 
STEP: delete the pod
Jul  5 13:58:27.754: INFO: Waiting for pod downward-api-2f5d04b8-416f-41c3-bff3-3171d97a5b41 to disappear
Jul  5 13:58:27.811: INFO: Pod downward-api-2f5d04b8-416f-41c3-bff3-3171d97a5b41 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:58:27.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3853" for this suite.
Jul  5 13:58:33.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:58:33.906: INFO: namespace downward-api-3853 deletion completed in 6.091819593s

• [SLOW TEST:10.400 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
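
Editor's note: the dapi-container above receives the node's IP through the downward API as an environment variable with a status.hostIP field reference. A minimal sketch (variable name, image, and command are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP", // hypothetical variable name
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							APIVersion: "v1",
							FieldPath:  "status.hostIP",
						},
					},
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
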
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:58:33.907: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jul  5 13:58:33.988: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:58:41.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4777" for this suite.
Jul  5 13:59:04.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:59:04.104: INFO: namespace init-container-4777 deletion completed in 22.100617371s

• [SLOW TEST:30.197 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
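
Editor's note: "invoke init containers on a RestartAlways pod" above means the pod's init containers must each run to completion, in order, before the regular container starts, with the pod's restart policy set to Always. A minimal sketch of the shape of such a pod (names, images, and commands are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Both init containers must exit 0, in order, before the main
			// container starts -- the condition the test above waits on.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
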
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:59:04.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-03e95d78-d055-4eed-a513-7d13230e2315
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-03e95d78-d055-4eed-a513-7d13230e2315
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:59:12.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9804" for this suite.
Jul  5 13:59:34.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:59:34.443: INFO: namespace projected-9804 deletion completed in 22.157492824s

• [SLOW TEST:30.339 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:59:34.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 13:59:34.508: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7a198b6a-cdce-4309-8718-a16c2a8fd643" in namespace "downward-api-7144" to be "success or failure"
Jul  5 13:59:34.541: INFO: Pod "downwardapi-volume-7a198b6a-cdce-4309-8718-a16c2a8fd643": Phase="Pending", Reason="", readiness=false. Elapsed: 33.569679ms
Jul  5 13:59:36.546: INFO: Pod "downwardapi-volume-7a198b6a-cdce-4309-8718-a16c2a8fd643": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037921113s
Jul  5 13:59:38.560: INFO: Pod "downwardapi-volume-7a198b6a-cdce-4309-8718-a16c2a8fd643": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051709466s
STEP: Saw pod success
Jul  5 13:59:38.560: INFO: Pod "downwardapi-volume-7a198b6a-cdce-4309-8718-a16c2a8fd643" satisfied condition "success or failure"
Jul  5 13:59:38.563: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7a198b6a-cdce-4309-8718-a16c2a8fd643 container client-container: 
STEP: delete the pod
Jul  5 13:59:38.612: INFO: Waiting for pod downwardapi-volume-7a198b6a-cdce-4309-8718-a16c2a8fd643 to disappear
Jul  5 13:59:38.614: INFO: Pod downwardapi-volume-7a198b6a-cdce-4309-8718-a16c2a8fd643 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:59:38.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7144" for this suite.
Jul  5 13:59:44.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:59:44.722: INFO: namespace downward-api-7144 deletion completed in 6.104371449s

• [SLOW TEST:10.278 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
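
"should set mode on item file" exercises the per-item mode field of a downwardAPI volume, which overrides the volume's defaultMode for that one file. An illustrative sketch (the label, names, and the 0400 mode are examples, not the test's exact values):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
  labels:
    zone: us-east-coast        # illustrative label to project
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ['sh', '-c', 'ls -l /etc/podinfo && cat /etc/podinfo/labels']
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
        mode: 0400             # per-item file mode being asserted
EOF
kubectl logs downward-mode-demo   # expect -r-------- on /etc/podinfo/labels
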
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:59:44.722: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jul  5 13:59:44.853: INFO: Waiting up to 5m0s for pod "client-containers-6a53701a-05ca-4b2c-8b66-725f1f8f8c1a" in namespace "containers-4056" to be "success or failure"
Jul  5 13:59:44.862: INFO: Pod "client-containers-6a53701a-05ca-4b2c-8b66-725f1f8f8c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.134231ms
Jul  5 13:59:46.866: INFO: Pod "client-containers-6a53701a-05ca-4b2c-8b66-725f1f8f8c1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012964526s
Jul  5 13:59:48.870: INFO: Pod "client-containers-6a53701a-05ca-4b2c-8b66-725f1f8f8c1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017252216s
STEP: Saw pod success
Jul  5 13:59:48.870: INFO: Pod "client-containers-6a53701a-05ca-4b2c-8b66-725f1f8f8c1a" satisfied condition "success or failure"
Jul  5 13:59:48.873: INFO: Trying to get logs from node iruya-worker pod client-containers-6a53701a-05ca-4b2c-8b66-725f1f8f8c1a container test-container: 
STEP: delete the pod
Jul  5 13:59:48.924: INFO: Waiting for pod client-containers-6a53701a-05ca-4b2c-8b66-725f1f8f8c1a to disappear
Jul  5 13:59:48.946: INFO: Pod client-containers-6a53701a-05ca-4b2c-8b66-725f1f8f8c1a no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 13:59:48.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4056" for this suite.
Jul  5 13:59:54.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 13:59:55.066: INFO: namespace containers-4056 deletion completed in 6.116717837s

• [SLOW TEST:10.344 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
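
"use the image defaults" means the pod spec leaves both command and args empty, in which case the container runs the image's own ENTRYPOINT and CMD (spec.containers[].command, when set, replaces ENTRYPOINT; args replaces CMD). A minimal illustration:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/nginx:1.14-alpine
    # no command/args: the image's ENTRYPOINT and CMD apply unchanged
EOF
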
SSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 13:59:55.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jul  5 13:59:55.121: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jul  5 14:00:04.169: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:00:04.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4490" for this suite.
Jul  5 14:00:10.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:00:10.308: INFO: namespace pods-4490 deletion completed in 6.112376226s

• [SLOW TEST:15.241 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
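
The submit/watch/graceful-delete flow above can be approximated from the CLI; the label and pod name are illustrative, and --grace-period mirrors the "deleting the pod gracefully" step:

# in one shell: watch ADDED/MODIFIED/DELETED events for the pod
kubectl get pods -w -l name=demo
# in another shell: submit, then delete with a grace period
kubectl run demo --image=docker.io/library/nginx:1.14-alpine \
  --restart=Never --labels=name=demo
kubectl delete pod demo --grace-period=30
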
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:00:10.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 14:00:10.430: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f01874cd-a22f-4733-b7ef-556da771ce51" in namespace "downward-api-3439" to be "success or failure"
Jul  5 14:00:10.433: INFO: Pod "downwardapi-volume-f01874cd-a22f-4733-b7ef-556da771ce51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.482112ms
Jul  5 14:00:12.437: INFO: Pod "downwardapi-volume-f01874cd-a22f-4733-b7ef-556da771ce51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007047078s
Jul  5 14:00:14.442: INFO: Pod "downwardapi-volume-f01874cd-a22f-4733-b7ef-556da771ce51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011442297s
STEP: Saw pod success
Jul  5 14:00:14.442: INFO: Pod "downwardapi-volume-f01874cd-a22f-4733-b7ef-556da771ce51" satisfied condition "success or failure"
Jul  5 14:00:14.445: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-f01874cd-a22f-4733-b7ef-556da771ce51 container client-container: 
STEP: delete the pod
Jul  5 14:00:14.464: INFO: Waiting for pod downwardapi-volume-f01874cd-a22f-4733-b7ef-556da771ce51 to disappear
Jul  5 14:00:14.486: INFO: Pod downwardapi-volume-f01874cd-a22f-4733-b7ef-556da771ce51 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:00:14.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3439" for this suite.
Jul  5 14:00:20.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:00:20.593: INFO: namespace downward-api-3439 deletion completed in 6.103778241s

• [SLOW TEST:10.285 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:00:20.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul  5 14:00:28.774: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:28.779: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:30.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:30.784: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:32.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:32.784: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:34.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:34.782: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:36.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:36.784: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:38.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:38.784: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:40.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:40.784: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:42.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:42.783: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:44.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:44.784: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:46.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:46.783: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:48.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:48.784: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:50.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:50.783: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:52.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:52.783: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:54.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:54.783: INFO: Pod pod-with-poststart-exec-hook still exists
Jul  5 14:00:56.779: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul  5 14:00:56.784: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:00:56.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9059" for this suite.
Jul  5 14:01:18.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:01:18.880: INFO: namespace container-lifecycle-hook-9059 deletion completed in 22.092434866s

• [SLOW TEST:58.286 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
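
A minimal sketch of the postStart exec hook checked above: the hook runs after the container is created but with no ordering guarantee relative to the entrypoint, and the container is killed if the hook fails. The names and message file are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo
spec:
  containers:
  - name: main
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        exec:
          command: ['sh', '-c', 'echo started > /usr/share/message']
EOF
kubectl exec poststart-demo -- cat /usr/share/message   # prints "started"
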
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:01:18.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-8bc74a50-a9bd-485a-8d1d-0c0aaf36ce7d
STEP: Creating a pod to test consume secrets
Jul  5 14:01:18.967: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-70f50a45-4774-4167-a16a-77fd1537ea2b" in namespace "projected-3192" to be "success or failure"
Jul  5 14:01:18.987: INFO: Pod "pod-projected-secrets-70f50a45-4774-4167-a16a-77fd1537ea2b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.519747ms
Jul  5 14:01:20.991: INFO: Pod "pod-projected-secrets-70f50a45-4774-4167-a16a-77fd1537ea2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023870313s
Jul  5 14:01:22.995: INFO: Pod "pod-projected-secrets-70f50a45-4774-4167-a16a-77fd1537ea2b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028190108s
Jul  5 14:01:25.000: INFO: Pod "pod-projected-secrets-70f50a45-4774-4167-a16a-77fd1537ea2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032490315s
STEP: Saw pod success
Jul  5 14:01:25.000: INFO: Pod "pod-projected-secrets-70f50a45-4774-4167-a16a-77fd1537ea2b" satisfied condition "success or failure"
Jul  5 14:01:25.003: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-70f50a45-4774-4167-a16a-77fd1537ea2b container projected-secret-volume-test: 
STEP: delete the pod
Jul  5 14:01:25.021: INFO: Waiting for pod pod-projected-secrets-70f50a45-4774-4167-a16a-77fd1537ea2b to disappear
Jul  5 14:01:25.025: INFO: Pod pod-projected-secrets-70f50a45-4774-4167-a16a-77fd1537ea2b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:01:25.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3192" for this suite.
Jul  5 14:01:31.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:01:31.541: INFO: namespace projected-3192 deletion completed in 6.513609644s

• [SLOW TEST:12.661 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
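
A sketch of the non-root/defaultMode/fsGroup combination under test, with illustrative names and IDs: defaultMode sits at the projected-volume level, and fsGroup sets the group ownership that the mode's group bits apply to, which is what lets a non-root reader see the file:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  securityContext:
    runAsUser: 1000            # non-root, as in the test
    fsGroup: 1001              # group ownership of the projected files
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox:1.29
    command: ['sh', '-c', 'ls -ln /etc/secret && cat /etc/secret/data-1']
    volumeMounts:
    - name: sec
      mountPath: /etc/secret
  volumes:
  - name: sec
    projected:
      defaultMode: 0440        # readable by owner and fsGroup only
      sources:
      - secret:
          name: demo-secret
EOF
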
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:01:31.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 14:01:32.494: INFO: Creating deployment "test-recreate-deployment"
Jul  5 14:01:32.562: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jul  5 14:01:32.591: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jul  5 14:01:34.659: INFO: Waiting deployment "test-recreate-deployment" to complete
Jul  5 14:01:34.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729554492, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729554492, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729554492, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729554492, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 14:01:36.728: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul  5 14:01:36.734: INFO: Updating deployment test-recreate-deployment
Jul  5 14:01:36.734: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jul  5 14:01:37.321: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-797,SelfLink:/apis/apps/v1/namespaces/deployment-797/deployments/test-recreate-deployment,UID:71bb14dd-5c13-4095-b8cc-00c8b29354de,ResourceVersion:243361,Generation:2,CreationTimestamp:2020-07-05 14:01:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-07-05 14:01:36 +0000 UTC 2020-07-05 14:01:36 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-05 14:01:37 +0000 UTC 2020-07-05 14:01:32 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}

Jul  5 14:01:37.454: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-797,SelfLink:/apis/apps/v1/namespaces/deployment-797/replicasets/test-recreate-deployment-5c8c9cc69d,UID:0e6becd2-783c-42a4-82e6-5e16fe3b0bd6,ResourceVersion:243359,Generation:1,CreationTimestamp:2020-07-05 14:01:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 71bb14dd-5c13-4095-b8cc-00c8b29354de 0xc001689167 0xc001689168}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  5 14:01:37.454: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul  5 14:01:37.454: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-797,SelfLink:/apis/apps/v1/namespaces/deployment-797/replicasets/test-recreate-deployment-6df85df6b9,UID:c85b1dc7-91a2-4c12-b245-44a47a731948,ResourceVersion:243349,Generation:2,CreationTimestamp:2020-07-05 14:01:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 71bb14dd-5c13-4095-b8cc-00c8b29354de 0xc001689237 0xc001689238}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  5 14:01:37.458: INFO: Pod "test-recreate-deployment-5c8c9cc69d-srpn7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-srpn7,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-797,SelfLink:/api/v1/namespaces/deployment-797/pods/test-recreate-deployment-5c8c9cc69d-srpn7,UID:f80fc2a4-c80d-4ec2-98a1-2e0a88f85f99,ResourceVersion:243362,Generation:0,CreationTimestamp:2020-07-05 14:01:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 0e6becd2-783c-42a4-82e6-5e16fe3b0bd6 0xc0030b2c07 0xc0030b2c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-52bgd {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-52bgd,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-52bgd true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030b2c80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030b2ca0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:01:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:01:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:01:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:01:36 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:01:37 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:01:37.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-797" for this suite.
Jul  5 14:01:43.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:01:43.654: INFO: namespace deployment-797 deletion completed in 6.193993395s

• [SLOW TEST:12.113 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
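
The struct dumps above show the shape being tested: strategy Type:Recreate, an old ReplicaSet on redis:1.0 scaled to 0, and a new one on nginx:1.14-alpine brought up only afterwards. A hedged reconstruction (deployment name and labels reused from the log; the suite also renames the container, which set image does not):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate             # old RS scales to 0 before the new RS scales up
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
# Trigger a rollout; with Recreate there is a window with zero available pods:
kubectl set image deployment/test-recreate-deployment redis=docker.io/library/nginx:1.14-alpine
kubectl rollout status deployment/test-recreate-deployment
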
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:01:43.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5532.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5532.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5532.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5532.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5532.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5532.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul  5 14:01:51.823: INFO: DNS probes using dns-5532/dns-test-1a2b1832-9677-4031-9100-255c178efeb6 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:01:51.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5532" for this suite.
Jul  5 14:01:57.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:01:57.970: INFO: namespace dns-5532 deletion completed in 6.099140045s

• [SLOW TEST:14.315 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
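
What the probe scripts above check is that, for a pod with spec.hostname and spec.subdomain backed by a headless service, the kubelet writes both the short name and the FQDN into the pod's own /etc/hosts, so getent resolves them without hitting cluster DNS. An illustrative reconstruction ("default" stands in for the generated namespace dns-5532):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service       # headless service backing the subdomain
spec:
  clusterIP: None
  selector:
    app: dns-demo
  ports:
  - port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-1
  labels:
    app: dns-demo
spec:
  hostname: dns-querier-1
  subdomain: dns-test-service
  containers:
  - name: util
    image: busybox:1.29
    command: ['sh', '-c', 'sleep 3600']
EOF
kubectl exec dns-querier-1 -- getent hosts dns-querier-1
kubectl exec dns-querier-1 -- getent hosts \
  dns-querier-1.dns-test-service.default.svc.cluster.local
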
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:01:57.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-c4df8574-f111-47fb-8751-76598e2eea37
STEP: Creating a pod to test consume configMaps
Jul  5 14:01:58.040: INFO: Waiting up to 5m0s for pod "pod-configmaps-8b2cbff6-4ee6-4596-8e49-68c549871092" in namespace "configmap-3018" to be "success or failure"
Jul  5 14:01:58.044: INFO: Pod "pod-configmaps-8b2cbff6-4ee6-4596-8e49-68c549871092": Phase="Pending", Reason="", readiness=false. Elapsed: 3.663091ms
Jul  5 14:02:00.148: INFO: Pod "pod-configmaps-8b2cbff6-4ee6-4596-8e49-68c549871092": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107574292s
Jul  5 14:02:02.151: INFO: Pod "pod-configmaps-8b2cbff6-4ee6-4596-8e49-68c549871092": Phase="Running", Reason="", readiness=true. Elapsed: 4.111222868s
Jul  5 14:02:04.156: INFO: Pod "pod-configmaps-8b2cbff6-4ee6-4596-8e49-68c549871092": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.115793424s
STEP: Saw pod success
Jul  5 14:02:04.156: INFO: Pod "pod-configmaps-8b2cbff6-4ee6-4596-8e49-68c549871092" satisfied condition "success or failure"
Jul  5 14:02:04.159: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-8b2cbff6-4ee6-4596-8e49-68c549871092 container configmap-volume-test: 
STEP: delete the pod
Jul  5 14:02:04.194: INFO: Waiting for pod pod-configmaps-8b2cbff6-4ee6-4596-8e49-68c549871092 to disappear
Jul  5 14:02:04.208: INFO: Pod pod-configmaps-8b2cbff6-4ee6-4596-8e49-68c549871092 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:02:04.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3018" for this suite.
Jul  5 14:02:10.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:02:10.316: INFO: namespace configmap-3018 deletion completed in 6.105136291s

• [SLOW TEST:12.346 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:02:10.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jul  5 14:02:10.350: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:02:10.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3994" for this suite.
Jul  5 14:02:16.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:02:16.529: INFO: namespace kubectl-3994 deletion completed in 6.08740576s

• [SLOW TEST:6.212 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
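
--port 0 (also spelled -p 0 above) asks kubectl proxy to bind an ephemeral port and print the one it chose, which is what the test then curls; --disable-filter, also passed above, turns off the proxy's request filtering. A sketch, where 44321 is a hypothetical port read from the proxy's startup line:

kubectl proxy --port=0 --disable-filter &
# prints e.g. "Starting to serve on 127.0.0.1:44321"
curl http://127.0.0.1:44321/api/
kill %1
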
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:02:16.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0705 14:02:29.454078       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  5 14:02:29.454: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:02:29.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7905" for this suite.
Jul  5 14:02:39.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:02:39.627: INFO: namespace gc-7905 deletion completed in 10.169460627s

• [SLOW TEST:23.098 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
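
The invariant under test: a dependent is garbage-collected only once every entry in its ownerReferences is gone, so pods owned by both rcs survive the deletion of the first. A sketch of the foreground deletion via the API (rc names copied from the log; namespace "default" is illustrative):

kubectl proxy --port=8001 &
sleep 1
curl -X DELETE \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
  http://127.0.0.1:8001/api/v1/namespaces/default/replicationcontrollers/simpletest-rc-to-be-deleted
# Pods whose ownerReferences still include simpletest-rc-to-stay remain:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.metadata.ownerReferences[*].name}{"\n"}{end}'
kill %1
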
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:02:39.627: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  5 14:02:39.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-4552'
Jul  5 14:02:39.825: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  5 14:02:39.825: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jul  5 14:02:39.860: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-hnzqs]
Jul  5 14:02:39.860: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-hnzqs" in namespace "kubectl-4552" to be "running and ready"
Jul  5 14:02:39.951: INFO: Pod "e2e-test-nginx-rc-hnzqs": Phase="Pending", Reason="", readiness=false. Elapsed: 90.832948ms
Jul  5 14:02:41.955: INFO: Pod "e2e-test-nginx-rc-hnzqs": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095207272s
Jul  5 14:02:43.959: INFO: Pod "e2e-test-nginx-rc-hnzqs": Phase="Running", Reason="", readiness=true. Elapsed: 4.099395666s
Jul  5 14:02:43.959: INFO: Pod "e2e-test-nginx-rc-hnzqs" satisfied condition "running and ready"
Jul  5 14:02:43.959: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-hnzqs]
Jul  5 14:02:43.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-4552'
Jul  5 14:02:44.072: INFO: stderr: ""
Jul  5 14:02:44.072: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jul  5 14:02:44.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-4552'
Jul  5 14:02:44.181: INFO: stderr: ""
Jul  5 14:02:44.181: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:02:44.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4552" for this suite.
Jul  5 14:02:50.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:02:50.329: INFO: namespace kubectl-4552 deletion completed in 6.09555672s

• [SLOW TEST:10.702 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
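
The stderr above spells out the migration path for the deprecated --generator=run/v1 form; side by side (resource names illustrative):

# deprecated: creates a ReplicationController
kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
# suggested replacements from the warning:
kubectl run nginx-pod --image=docker.io/library/nginx:1.14-alpine --generator=run-pod/v1
kubectl create deployment nginx --image=docker.io/library/nginx:1.14-alpine
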
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:02:50.330: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-562a8ce7-40b1-4d6a-9512-faabb76dde6f in namespace container-probe-2009
Jul  5 14:02:54.448: INFO: Started pod busybox-562a8ce7-40b1-4d6a-9512-faabb76dde6f in namespace container-probe-2009
STEP: checking the pod's current state and verifying that restartCount is present
Jul  5 14:02:54.451: INFO: Initial restart count of pod busybox-562a8ce7-40b1-4d6a-9512-faabb76dde6f is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:06:55.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2009" for this suite.
Jul  5 14:07:01.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:07:01.859: INFO: namespace container-probe-2009 deletion completed in 6.181101015s

• [SLOW TEST:251.530 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
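
A minimal sketch of the exec liveness probe above: /tmp/health is created once and never removed, so cat keeps exiting 0 and restartCount stays at its initial value for the whole observation window (names and timings illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ['sh', '-c', 'touch /tmp/health && sleep 3600']
    livenessProbe:
      exec:
        command: ['cat', '/tmp/health']   # exit 0 => container is live
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF
kubectl get pod liveness-exec-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'   # stays 0
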
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:07:01.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-6ec5ee21-94b0-4fd4-9153-59fe1a9bf4db
STEP: Creating a pod to test consume configMaps
Jul  5 14:07:01.959: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d9c1d4e7-b74e-4a45-b86d-bf6fb473045d" in namespace "projected-2603" to be "success or failure"
Jul  5 14:07:01.983: INFO: Pod "pod-projected-configmaps-d9c1d4e7-b74e-4a45-b86d-bf6fb473045d": Phase="Pending", Reason="", readiness=false. Elapsed: 23.969728ms
Jul  5 14:07:04.109: INFO: Pod "pod-projected-configmaps-d9c1d4e7-b74e-4a45-b86d-bf6fb473045d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150522902s
Jul  5 14:07:06.113: INFO: Pod "pod-projected-configmaps-d9c1d4e7-b74e-4a45-b86d-bf6fb473045d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154467498s
STEP: Saw pod success
Jul  5 14:07:06.113: INFO: Pod "pod-projected-configmaps-d9c1d4e7-b74e-4a45-b86d-bf6fb473045d" satisfied condition "success or failure"
Jul  5 14:07:06.116: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-d9c1d4e7-b74e-4a45-b86d-bf6fb473045d container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 14:07:06.160: INFO: Waiting for pod pod-projected-configmaps-d9c1d4e7-b74e-4a45-b86d-bf6fb473045d to disappear
Jul  5 14:07:06.264: INFO: Pod pod-projected-configmaps-d9c1d4e7-b74e-4a45-b86d-bf6fb473045d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:07:06.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2603" for this suite.
Jul  5 14:07:12.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:07:12.406: INFO: namespace projected-2603 deletion completed in 6.138485176s

• [SLOW TEST:10.547 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:07:12.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 14:07:12.538: INFO: Create a RollingUpdate DaemonSet
Jul  5 14:07:12.542: INFO: Check that daemon pods launch on every node of the cluster
Jul  5 14:07:12.547: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:07:12.556: INFO: Number of nodes with available pods: 0
Jul  5 14:07:12.556: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 14:07:13.561: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:07:13.564: INFO: Number of nodes with available pods: 0
Jul  5 14:07:13.564: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 14:07:14.943: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:07:14.946: INFO: Number of nodes with available pods: 0
Jul  5 14:07:14.946: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 14:07:15.560: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:07:15.564: INFO: Number of nodes with available pods: 0
Jul  5 14:07:15.564: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 14:07:16.561: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:07:16.565: INFO: Number of nodes with available pods: 0
Jul  5 14:07:16.565: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 14:07:19.098: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:07:19.151: INFO: Number of nodes with available pods: 2
Jul  5 14:07:19.151: INFO: Number of running nodes: 2, number of available pods: 2
Jul  5 14:07:19.151: INFO: Update the DaemonSet to trigger a rollout
Jul  5 14:07:19.192: INFO: Updating DaemonSet daemon-set
Jul  5 14:07:22.336: INFO: Roll back the DaemonSet before rollout is complete
Jul  5 14:07:22.342: INFO: Updating DaemonSet daemon-set
Jul  5 14:07:22.342: INFO: Make sure DaemonSet rollback is complete
Jul  5 14:07:22.349: INFO: Wrong image for pod: daemon-set-88fzk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jul  5 14:07:22.349: INFO: Pod daemon-set-88fzk is not available
Jul  5 14:07:22.377: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:07:23.451: INFO: Wrong image for pod: daemon-set-88fzk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jul  5 14:07:23.451: INFO: Pod daemon-set-88fzk is not available
Jul  5 14:07:23.455: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:07:24.397: INFO: Wrong image for pod: daemon-set-88fzk. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jul  5 14:07:24.397: INFO: Pod daemon-set-88fzk is not available
Jul  5 14:07:24.463: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:07:25.381: INFO: Pod daemon-set-g77sk is not available
Jul  5 14:07:25.386: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3689, will wait for the garbage collector to delete the pods
Jul  5 14:07:25.453: INFO: Deleting DaemonSet.extensions daemon-set took: 7.520383ms
Jul  5 14:07:25.754: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.263196ms
Jul  5 14:07:35.957: INFO: Number of nodes with available pods: 0
Jul  5 14:07:35.957: INFO: Number of running nodes: 0, number of available pods: 0
Jul  5 14:07:35.960: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3689/daemonsets","resourceVersion":"244505"},"items":null}

Jul  5 14:07:35.962: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3689/pods","resourceVersion":"244505"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:07:35.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3689" for this suite.
Jul  5 14:07:41.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:07:42.068: INFO: namespace daemonsets-3689 deletion completed in 6.092970099s

• [SLOW TEST:29.661 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
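Two notes on the run above. The repeated "Node iruya-worker is running more than one daemon pod" is the framework's catch-all message whenever a node's daemon-pod count differs from one; it also fires while the count is still zero during startup, as here, so it does not indicate duplicate pods. And the rollback itself is nothing more exotic than restoring the previous pod template. A sketch of the update-then-rollback sequence, using the context-free client-go signatures contemporary with this v1.15 run (newer releases add a context argument; namespace and object names illustrative):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	dsClient := cs.AppsV1().DaemonSets("default")

	ds, err := dsClient.Get("daemon-set", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	good := ds.Spec.Template.Spec.Containers[0].Image // remember the working image

	// Trigger a rollout with an image that can never be pulled.
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = dsClient.Update(ds); err != nil {
		panic(err)
	}

	// "Roll back" before the broken rollout completes: restore the old template.
	// (A production client would wrap these updates in a retry-on-conflict loop.)
	ds.Spec.Template.Spec.Containers[0].Image = good
	if _, err = dsClient.Update(ds); err != nil {
		panic(err)
	}
	fmt.Println("rolled back to", good)
}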
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:07:42.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jul  5 14:07:46.159: INFO: Pod pod-hostip-11689a29-d8ce-4c99-b875-460c71021bf4 has hostIP: 172.17.0.6
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:07:46.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6329" for this suite.
Jul  5 14:08:08.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:08:08.273: INFO: namespace pods-6329 deletion completed in 22.110220456s

• [SLOW TEST:26.204 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
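The only assertion in this spec is that status.hostIP is populated once the pod is bound and started; 172.17.0.6 is the Docker-network address of the kind worker node. A sketch of the same check (context-free client-go signatures as above; pod name and namespace illustrative):

package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll until the kubelet has reported the node's address for the pod.
	err = wait.PollImmediate(2*time.Second, time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("default").Get("my-pod", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.HostIP == "" {
			return false, nil // scheduled but not yet reported; keep waiting
		}
		fmt.Println("hostIP:", pod.Status.HostIP)
		return true, nil
	})
	if err != nil {
		panic(err)
	}
}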
SSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:08:08.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 14:08:08.376: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00489ee9-9979-41f5-95b7-a6b95a686324" in namespace "projected-5377" to be "success or failure"
Jul  5 14:08:08.385: INFO: Pod "downwardapi-volume-00489ee9-9979-41f5-95b7-a6b95a686324": Phase="Pending", Reason="", readiness=false. Elapsed: 9.22615ms
Jul  5 14:08:10.390: INFO: Pod "downwardapi-volume-00489ee9-9979-41f5-95b7-a6b95a686324": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013954249s
Jul  5 14:08:12.394: INFO: Pod "downwardapi-volume-00489ee9-9979-41f5-95b7-a6b95a686324": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018431567s
STEP: Saw pod success
Jul  5 14:08:12.394: INFO: Pod "downwardapi-volume-00489ee9-9979-41f5-95b7-a6b95a686324" satisfied condition "success or failure"
Jul  5 14:08:12.397: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-00489ee9-9979-41f5-95b7-a6b95a686324 container client-container: 
STEP: delete the pod
Jul  5 14:08:12.445: INFO: Waiting for pod downwardapi-volume-00489ee9-9979-41f5-95b7-a6b95a686324 to disappear
Jul  5 14:08:12.450: INFO: Pod downwardapi-volume-00489ee9-9979-41f5-95b7-a6b95a686324 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:08:12.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5377" for this suite.
Jul  5 14:08:18.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:08:18.538: INFO: namespace projected-5377 deletion completed in 6.084292741s

• [SLOW TEST:10.264 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
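Here the projected volume's downwardAPI source exposes the pod's own name as a file, which the client container then prints. Sketch, with illustrative names and mount path:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-podname-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									// Write the pod's metadata.name into the file "podname".
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}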
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:08:18.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jul  5 14:08:26.782: INFO: 9 pods remaining
Jul  5 14:08:26.782: INFO: 0 pods has nil DeletionTimestamp
Jul  5 14:08:26.782: INFO: 
Jul  5 14:08:27.638: INFO: 0 pods remaining
Jul  5 14:08:27.638: INFO: 0 pods has nil DeletionTimestamp
Jul  5 14:08:27.638: INFO: 
Jul  5 14:08:28.454: INFO: 0 pods remaining
Jul  5 14:08:28.454: INFO: 0 pods has nil DeletionTimestamp
Jul  5 14:08:28.454: INFO: 
STEP: Gathering metrics
W0705 14:08:29.834860       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  5 14:08:29.834: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:08:29.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1082" for this suite.
Jul  5 14:08:35.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:08:36.014: INFO: namespace gc-1082 deletion completed in 6.176510479s

• [SLOW TEST:17.476 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
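"If the deleteOptions says so" refers to foreground cascading deletion: with PropagationPolicy set to Foreground, the RC gets a foregroundDeletion finalizer and a deletionTimestamp but survives until the garbage collector has removed every owned pod, which is exactly what the "9 pods remaining" to "0 pods remaining" polling above is watching. Triggering it is a one-liner (context-free client-go signature; namespace and RC name illustrative):

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Foreground propagation: the RC keeps a foregroundDeletion finalizer and
	// is only removed after the garbage collector has deleted all its pods.
	fg := metav1.DeletePropagationForeground
	err = cs.CoreV1().ReplicationControllers("default").
		Delete("my-rc", &metav1.DeleteOptions{PropagationPolicy: &fg})
	if err != nil {
		panic(err)
	}
}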
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:08:36.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul  5 14:08:36.112: INFO: Waiting up to 5m0s for pod "pod-2931c7bb-1c22-4b8e-bd92-4f93f9ce2bdf" in namespace "emptydir-2724" to be "success or failure"
Jul  5 14:08:36.121: INFO: Pod "pod-2931c7bb-1c22-4b8e-bd92-4f93f9ce2bdf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.315911ms
Jul  5 14:08:38.235: INFO: Pod "pod-2931c7bb-1c22-4b8e-bd92-4f93f9ce2bdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123332505s
Jul  5 14:08:40.331: INFO: Pod "pod-2931c7bb-1c22-4b8e-bd92-4f93f9ce2bdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.218832379s
STEP: Saw pod success
Jul  5 14:08:40.331: INFO: Pod "pod-2931c7bb-1c22-4b8e-bd92-4f93f9ce2bdf" satisfied condition "success or failure"
Jul  5 14:08:40.333: INFO: Trying to get logs from node iruya-worker pod pod-2931c7bb-1c22-4b8e-bd92-4f93f9ce2bdf container test-container: 
STEP: delete the pod
Jul  5 14:08:40.488: INFO: Waiting for pod pod-2931c7bb-1c22-4b8e-bd92-4f93f9ce2bdf to disappear
Jul  5 14:08:40.510: INFO: Pod pod-2931c7bb-1c22-4b8e-bd92-4f93f9ce2bdf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:08:40.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2724" for this suite.
Jul  5 14:08:46.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:08:46.630: INFO: namespace emptydir-2724 deletion completed in 6.115647028s

• [SLOW TEST:10.615 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
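"(root,0644,default)" decodes as: file owned by root, created with mode 0644, on an emptyDir backed by the node's default medium (an empty Medium means disk-backed; corev1.StorageMediumMemory would mean tmpfs). A sketch of an equivalent pod; the suite itself drives this through its mounttest image rather than the busybox one-liner below:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// Medium left empty selects the node's default storage.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// No securityContext, so busybox runs as root by default.
				Command: []string{"sh", "-c",
					"echo hello > /ed/f && chmod 0644 /ed/f && ls -l /ed/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/ed"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}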
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:08:46.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jul  5 14:08:46.752: INFO: Waiting up to 5m0s for pod "client-containers-4b3581f0-6d87-4f65-8bb0-36792301cad4" in namespace "containers-2558" to be "success or failure"
Jul  5 14:08:46.768: INFO: Pod "client-containers-4b3581f0-6d87-4f65-8bb0-36792301cad4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.94012ms
Jul  5 14:08:48.834: INFO: Pod "client-containers-4b3581f0-6d87-4f65-8bb0-36792301cad4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081795979s
Jul  5 14:08:50.839: INFO: Pod "client-containers-4b3581f0-6d87-4f65-8bb0-36792301cad4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086371856s
STEP: Saw pod success
Jul  5 14:08:50.839: INFO: Pod "client-containers-4b3581f0-6d87-4f65-8bb0-36792301cad4" satisfied condition "success or failure"
Jul  5 14:08:50.842: INFO: Trying to get logs from node iruya-worker2 pod client-containers-4b3581f0-6d87-4f65-8bb0-36792301cad4 container test-container: 
STEP: delete the pod
Jul  5 14:08:50.896: INFO: Waiting for pod client-containers-4b3581f0-6d87-4f65-8bb0-36792301cad4 to disappear
Jul  5 14:08:50.900: INFO: Pod client-containers-4b3581f0-6d87-4f65-8bb0-36792301cad4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:08:50.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2558" for this suite.
Jul  5 14:08:56.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:08:56.986: INFO: namespace containers-2558 deletion completed in 6.078890247s

• [SLOW TEST:10.356 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
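"Override the image's default arguments (docker cmd)" maps onto the container's args field: args replaces the image's CMD while any ENTRYPOINT is kept, and command would replace the ENTRYPOINT as well. Sketch, with illustrative image and strings:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "args-override-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Args alone overrides only the image's CMD; busybox has no
				// ENTRYPOINT, so these args become the command that runs.
				Args: []string{"echo", "override", "arguments"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}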
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:08:56.986: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 14:08:57.053: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4fae939-50d4-414f-9683-6927ff027966" in namespace "projected-8716" to be "success or failure"
Jul  5 14:08:57.062: INFO: Pod "downwardapi-volume-e4fae939-50d4-414f-9683-6927ff027966": Phase="Pending", Reason="", readiness=false. Elapsed: 9.111602ms
Jul  5 14:08:59.067: INFO: Pod "downwardapi-volume-e4fae939-50d4-414f-9683-6927ff027966": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013319666s
Jul  5 14:09:01.071: INFO: Pod "downwardapi-volume-e4fae939-50d4-414f-9683-6927ff027966": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017357449s
STEP: Saw pod success
Jul  5 14:09:01.071: INFO: Pod "downwardapi-volume-e4fae939-50d4-414f-9683-6927ff027966" satisfied condition "success or failure"
Jul  5 14:09:01.073: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e4fae939-50d4-414f-9683-6927ff027966 container client-container: 
STEP: delete the pod
Jul  5 14:09:01.087: INFO: Waiting for pod downwardapi-volume-e4fae939-50d4-414f-9683-6927ff027966 to disappear
Jul  5 14:09:01.092: INFO: Pod downwardapi-volume-e4fae939-50d4-414f-9683-6927ff027966 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:09:01.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8716" for this suite.
Jul  5 14:09:07.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:09:07.190: INFO: namespace projected-8716 deletion completed in 6.095600024s

• [SLOW TEST:10.204 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
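This spec reads the container's own memory limit through a resourceFieldRef instead of a fieldRef. Sketch with an explicit 64Mi limit; names and the limit value are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-memlimit-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									// resourceFieldRef reads the named container's
									// resource settings, here its memory limit.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

With the default divisor of 1, the file comes out in bytes: 67108864 for the 64Mi limit above.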
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:09:07.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-bbe4c39c-270c-401e-ae13-3eba2913bb51
STEP: Creating a pod to test consume secrets
Jul  5 14:09:07.305: INFO: Waiting up to 5m0s for pod "pod-secrets-8499323a-66db-44c3-9888-7e8b5b3469a8" in namespace "secrets-4581" to be "success or failure"
Jul  5 14:09:07.308: INFO: Pod "pod-secrets-8499323a-66db-44c3-9888-7e8b5b3469a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.39426ms
Jul  5 14:09:09.312: INFO: Pod "pod-secrets-8499323a-66db-44c3-9888-7e8b5b3469a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007865245s
Jul  5 14:09:11.317: INFO: Pod "pod-secrets-8499323a-66db-44c3-9888-7e8b5b3469a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012233136s
STEP: Saw pod success
Jul  5 14:09:11.317: INFO: Pod "pod-secrets-8499323a-66db-44c3-9888-7e8b5b3469a8" satisfied condition "success or failure"
Jul  5 14:09:11.320: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-8499323a-66db-44c3-9888-7e8b5b3469a8 container secret-volume-test: 
STEP: delete the pod
Jul  5 14:09:11.507: INFO: Waiting for pod pod-secrets-8499323a-66db-44c3-9888-7e8b5b3469a8 to disappear
Jul  5 14:09:11.548: INFO: Pod pod-secrets-8499323a-66db-44c3-9888-7e8b5b3469a8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:09:11.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4581" for this suite.
Jul  5 14:09:17.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:09:17.701: INFO: namespace secrets-4581 deletion completed in 6.149067902s

• [SLOW TEST:10.510 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
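defaultMode sets the permission bits stamped on every file projected from the secret, and the test then asserts the mode it observes inside the mount. Sketch with an illustrative 0400 (read-only for the owner):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // every projected file gets -r--------
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-defaultmode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "my-secret",
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/secret-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}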
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:09:17.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 14:09:17.770: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a0f13bbe-21e5-495d-bd12-3dedd906c0e1" in namespace "projected-2115" to be "success or failure"
Jul  5 14:09:17.791: INFO: Pod "downwardapi-volume-a0f13bbe-21e5-495d-bd12-3dedd906c0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.814036ms
Jul  5 14:09:19.795: INFO: Pod "downwardapi-volume-a0f13bbe-21e5-495d-bd12-3dedd906c0e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024595487s
Jul  5 14:09:21.799: INFO: Pod "downwardapi-volume-a0f13bbe-21e5-495d-bd12-3dedd906c0e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028746459s
STEP: Saw pod success
Jul  5 14:09:21.799: INFO: Pod "downwardapi-volume-a0f13bbe-21e5-495d-bd12-3dedd906c0e1" satisfied condition "success or failure"
Jul  5 14:09:21.803: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a0f13bbe-21e5-495d-bd12-3dedd906c0e1 container client-container: 
STEP: delete the pod
Jul  5 14:09:21.885: INFO: Waiting for pod downwardapi-volume-a0f13bbe-21e5-495d-bd12-3dedd906c0e1 to disappear
Jul  5 14:09:21.889: INFO: Pod downwardapi-volume-a0f13bbe-21e5-495d-bd12-3dedd906c0e1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:09:21.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2115" for this suite.
Jul  5 14:09:27.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:09:27.987: INFO: namespace projected-2115 deletion completed in 6.092331453s

• [SLOW TEST:10.286 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
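The cpu counterpart of the memory-limit spec, with the wrinkle the title names: since the container declares no cpu limit, the downward API reports the node's allocatable cpu instead. The divisor chooses the unit; "1m" below yields millicores. Names and divisor are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-cpu-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
										// Report in millicores. With no cpu limit on
										// the container, the value falls back to the
										// node's allocatable cpu.
										Divisor: resource.MustParse("1m"),
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container", // no resources.limits.cpu on purpose
				Image:        "busybox",
				Command:      []string{"cat", "/etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}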
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:09:27.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f4100574-b869-48e7-98ff-610b46f5c206
STEP: Creating a pod to test consume secrets
Jul  5 14:09:28.156: INFO: Waiting up to 5m0s for pod "pod-secrets-656f706a-13dd-437e-ae67-402e5d8eddf8" in namespace "secrets-9347" to be "success or failure"
Jul  5 14:09:28.200: INFO: Pod "pod-secrets-656f706a-13dd-437e-ae67-402e5d8eddf8": Phase="Pending", Reason="", readiness=false. Elapsed: 44.05813ms
Jul  5 14:09:30.204: INFO: Pod "pod-secrets-656f706a-13dd-437e-ae67-402e5d8eddf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048196307s
Jul  5 14:09:32.209: INFO: Pod "pod-secrets-656f706a-13dd-437e-ae67-402e5d8eddf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052809522s
STEP: Saw pod success
Jul  5 14:09:32.209: INFO: Pod "pod-secrets-656f706a-13dd-437e-ae67-402e5d8eddf8" satisfied condition "success or failure"
Jul  5 14:09:32.212: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-656f706a-13dd-437e-ae67-402e5d8eddf8 container secret-volume-test: 
STEP: delete the pod
Jul  5 14:09:32.232: INFO: Waiting for pod pod-secrets-656f706a-13dd-437e-ae67-402e5d8eddf8 to disappear
Jul  5 14:09:32.254: INFO: Pod pod-secrets-656f706a-13dd-437e-ae67-402e5d8eddf8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:09:32.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9347" for this suite.
Jul  5 14:09:38.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:09:38.409: INFO: namespace secrets-9347 deletion completed in 6.151729308s
STEP: Destroying namespace "secret-namespace-8539" for this suite.
Jul  5 14:09:44.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:09:44.525: INFO: namespace secret-namespace-8539 deletion completed in 6.116091231s

• [SLOW TEST:16.538 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
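Secret references in a pod are resolved strictly within the pod's own namespace, so this spec plants a decoy secret with the same name in a second namespace (the extra "secret-namespace-8539" being destroyed above) and verifies the pod still mounts its own. Sketch of the setup; namespaces, name, and payloads are illustrative, and both namespaces must already exist:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same secret name, different namespaces, different contents. A pod in
	// "ns-a" that mounts "shared-name" sees only the ns-a payload.
	for ns, payload := range map[string]string{"ns-a": "from-a", "ns-b": "from-b"} {
		_, err := cs.CoreV1().Secrets(ns).Create(&corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{Name: "shared-name", Namespace: ns},
			StringData: map[string]string{"value": payload},
		})
		if err != nil {
			panic(err)
		}
	}
}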
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:09:44.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jul  5 14:09:44.593: INFO: Waiting up to 5m0s for pod "pod-b3ab126d-6306-407c-9f7c-7ec0c62dbd6f" in namespace "emptydir-2999" to be "success or failure"
Jul  5 14:09:44.596: INFO: Pod "pod-b3ab126d-6306-407c-9f7c-7ec0c62dbd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.775042ms
Jul  5 14:09:46.600: INFO: Pod "pod-b3ab126d-6306-407c-9f7c-7ec0c62dbd6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006999713s
Jul  5 14:09:48.606: INFO: Pod "pod-b3ab126d-6306-407c-9f7c-7ec0c62dbd6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012603274s
STEP: Saw pod success
Jul  5 14:09:48.606: INFO: Pod "pod-b3ab126d-6306-407c-9f7c-7ec0c62dbd6f" satisfied condition "success or failure"
Jul  5 14:09:48.609: INFO: Trying to get logs from node iruya-worker pod pod-b3ab126d-6306-407c-9f7c-7ec0c62dbd6f container test-container: 
STEP: delete the pod
Jul  5 14:09:48.627: INFO: Waiting for pod pod-b3ab126d-6306-407c-9f7c-7ec0c62dbd6f to disappear
Jul  5 14:09:48.638: INFO: Pod pod-b3ab126d-6306-407c-9f7c-7ec0c62dbd6f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:09:48.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2999" for this suite.
Jul  5 14:09:54.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:09:54.726: INFO: namespace emptydir-2999 deletion completed in 6.084345172s

• [SLOW TEST:10.201 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:09:54.727: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 14:09:54.774: INFO: Creating deployment "nginx-deployment"
Jul  5 14:09:54.788: INFO: Waiting for observed generation 1
Jul  5 14:09:56.798: INFO: Waiting for all required pods to come up
Jul  5 14:09:56.802: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul  5 14:10:06.814: INFO: Waiting for deployment "nginx-deployment" to complete
Jul  5 14:10:06.818: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jul  5 14:10:06.823: INFO: Updating deployment nginx-deployment
Jul  5 14:10:06.823: INFO: Waiting for observed generation 2
Jul  5 14:10:08.900: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul  5 14:10:08.902: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul  5 14:10:08.904: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul  5 14:10:08.910: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul  5 14:10:08.910: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul  5 14:10:08.912: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jul  5 14:10:08.915: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jul  5 14:10:08.915: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jul  5 14:10:08.919: INFO: Updating deployment nginx-deployment
Jul  5 14:10:08.919: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jul  5 14:10:09.111: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul  5 14:10:09.320: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
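The two verifications above are pure arithmetic. The deployment was scaled from 10 to 30 while two ReplicaSets were live at sizes 8 (old) and 5 (new); with maxSurge 3 the controller may run 30 + 3 = 33 pods, so the 33 - 13 = 20 extra replicas are handed out in proportion to current size, growing the old set to 20 and the new one to 13. A simplified model of that split; the real controller's leftover handling goes through the deployment.kubernetes.io/max-replicas annotation, which this sketch glosses over:

package main

import "fmt"

// split distributes new replicas between ReplicaSets in the spirit of the
// deployment controller's proportional scaling: each RS receives a share of
// the headroom proportional to its current size, and the integer-division
// remainder is handed to one of them (here simply the last RS).
func split(desired, maxSurge int, sizes []int) []int {
	total := 0
	for _, s := range sizes {
		total += s
	}
	headroom := desired + maxSurge - total // 30 + 3 - 13 = 20
	out := make([]int, len(sizes))
	given := 0
	for i, s := range sizes {
		add := headroom * s / total // floor of the proportional share
		out[i] = s + add
		given += add
	}
	out[len(out)-1] += headroom - given // leftover replica(s)
	return out
}

func main() {
	fmt.Println(split(30, 3, []int{8, 5})) // prints [20 13]
}

Running it prints [20 13], matching the .spec.replicas values verified above.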
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jul  5 14:10:12.016: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-9959,SelfLink:/apis/apps/v1/namespaces/deployment-9959/deployments/nginx-deployment,UID:58bd31ae-d4bd-466e-bef3-e1951ba60f99,ResourceVersion:245463,Generation:3,CreationTimestamp:2020-07-05 14:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-07-05 14:10:09 +0000 UTC 2020-07-05 14:10:09 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-07-05 14:10:09 +0000 UTC 2020-07-05 14:09:54 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Jul  5 14:10:12.285: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-9959,SelfLink:/apis/apps/v1/namespaces/deployment-9959/replicasets/nginx-deployment-55fb7cb77f,UID:01466a89-fe30-43a4-8eca-c3d3a9fcf697,ResourceVersion:245460,Generation:3,CreationTimestamp:2020-07-05 14:10:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 58bd31ae-d4bd-466e-bef3-e1951ba60f99 0xc002c44257 0xc002c44258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  5 14:10:12.285: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jul  5 14:10:12.285: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-9959,SelfLink:/apis/apps/v1/namespaces/deployment-9959/replicasets/nginx-deployment-7b8c6f4498,UID:ba3677a2-f2be-4669-b1b2-ff702f4789bd,ResourceVersion:245444,Generation:3,CreationTimestamp:2020-07-05 14:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 58bd31ae-d4bd-466e-bef3-e1951ba60f99 0xc002c44327 0xc002c44328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jul  5 14:10:12.291: INFO: Pod "nginx-deployment-55fb7cb77f-2hgbs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2hgbs,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-2hgbs,UID:6137abb9-f778-49da-9d1f-6d733c0e1034,ResourceVersion:245515,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c69f7 0xc0030c69f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c6a70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c6a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:10:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.291: INFO: Pod "nginx-deployment-55fb7cb77f-4b8mf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4b8mf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-4b8mf,UID:ffc20aab-9a28-4d61-a62d-6baf1cbd513f,ResourceVersion:245361,Generation:0,CreationTimestamp:2020-07-05 14:10:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c6b60 0xc0030c6b61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c6be0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c6c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:06 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:10:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.292: INFO: Pod "nginx-deployment-55fb7cb77f-4r8vt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4r8vt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-4r8vt,UID:d1718400-bc71-4517-8c3b-adcedd52030d,ResourceVersion:245513,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c6cd0 0xc0030c6cd1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c6d50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c6d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.292: INFO: Pod "nginx-deployment-55fb7cb77f-68l9g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-68l9g,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-68l9g,UID:ffa16c95-7192-4042-85f7-f34269264118,ResourceVersion:245366,Generation:0,CreationTimestamp:2020-07-05 14:10:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c6e40 0xc0030c6e41}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c6ec0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c6ee0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:06 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.292: INFO: Pod "nginx-deployment-55fb7cb77f-7hqgw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7hqgw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-7hqgw,UID:56d5b76f-f351-4a80-8b04-50d41dad0f5d,ResourceVersion:245453,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c6fb0 0xc0030c6fb1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c7030} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c7050}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.292: INFO: Pod "nginx-deployment-55fb7cb77f-d9g95" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d9g95,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-d9g95,UID:be157582-868e-4349-9dad-6a56c20b2f5c,ResourceVersion:245480,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c7120 0xc0030c7121}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c71a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c71c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.292: INFO: Pod "nginx-deployment-55fb7cb77f-dctwj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dctwj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-dctwj,UID:fd7bfc30-fe75-4ed5-abc7-b07e5176064b,ResourceVersion:245354,Generation:0,CreationTimestamp:2020-07-05 14:10:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c7290 0xc0030c7291}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c7310} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c7330}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:06 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:06 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:06 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.293: INFO: Pod "nginx-deployment-55fb7cb77f-mckl8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-mckl8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-mckl8,UID:a5e769da-032a-4497-86e8-adc7800e511b,ResourceVersion:245383,Generation:0,CreationTimestamp:2020-07-05 14:10:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c7400 0xc0030c7401}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c7480} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c74a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.293: INFO: Pod "nginx-deployment-55fb7cb77f-nhzrq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-nhzrq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-nhzrq,UID:0ca02b3b-48c1-48a6-9114-7b2ba7a25222,ResourceVersion:245471,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c7570 0xc0030c7571}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c75f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c7610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.293: INFO: Pod "nginx-deployment-55fb7cb77f-phgks" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-phgks,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-phgks,UID:3ba86c47-68a5-4070-8af4-b6781a22ff37,ResourceVersion:245510,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c76e0 0xc0030c76e1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c7760} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c7780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.293: INFO: Pod "nginx-deployment-55fb7cb77f-pvxxw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pvxxw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-pvxxw,UID:da65fad5-8409-4d93-9037-a1aaaecd932a,ResourceVersion:245522,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c7850 0xc0030c7851}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c78d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c78f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:10 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:10 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.294: INFO: Pod "nginx-deployment-55fb7cb77f-q44bg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q44bg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-q44bg,UID:04842dba-9f12-4d39-a075-1c4c67289c6b,ResourceVersion:245381,Generation:0,CreationTimestamp:2020-07-05 14:10:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c79c0 0xc0030c79c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c7a40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c7a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:07 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:10:07 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.294: INFO: Pod "nginx-deployment-55fb7cb77f-r67g8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r67g8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-55fb7cb77f-r67g8,UID:2acf8971-1482-43a9-833e-9035f9caab15,ResourceVersion:245495,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 01466a89-fe30-43a4-8eca-c3d3a9fcf697 0xc0030c7b30 0xc0030c7b31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c7bb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c7bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.294: INFO: Pod "nginx-deployment-7b8c6f4498-2chh7" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2chh7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-2chh7,UID:6894f53f-890c-469c-9de6-3bbfff2705d3,ResourceVersion:245307,Generation:0,CreationTimestamp:2020-07-05 14:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc0030c7ca0 0xc0030c7ca1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c7d10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c7d30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:04 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:04 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.1.164,StartTime:2020-07-05 14:09:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 14:10:03 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://70aff69fb0dae8b8ef666a92eae3cbba763c70457c872ab02ef093edb649309c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.294: INFO: Pod "nginx-deployment-7b8c6f4498-8vvn2" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8vvn2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-8vvn2,UID:c7ec41f2-160f-45ed-ace1-8a0f43d20c4a,ResourceVersion:245325,Generation:0,CreationTimestamp:2020-07-05 14:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc0030c7e00 0xc0030c7e01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c7e70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c7e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.222,StartTime:2020-07-05 14:09:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 14:10:05 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2a73e81ca7f6ceac7afbc57837ba605ec7ae7be6b6e20cd69252c492f3564a83}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.294: INFO: Pod "nginx-deployment-7b8c6f4498-9frq8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9frq8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-9frq8,UID:bbf954be-a8d0-4576-9bdd-b28d35e34d3c,ResourceVersion:245474,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc0030c7f60 0xc0030c7f61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0030c7fd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0030c7ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.295: INFO: Pod "nginx-deployment-7b8c6f4498-ccg25" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ccg25,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-ccg25,UID:e192baf3-fe38-499d-b683-263e65f185d0,ResourceVersion:245455,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b680b0 0xc003b680b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b68120} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b68140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.295: INFO: Pod "nginx-deployment-7b8c6f4498-cjfjv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cjfjv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-cjfjv,UID:ac5984c4-9479-4d02-b8fa-6747f646043c,ResourceVersion:245431,Generation:0,CreationTimestamp:2020-07-05 14:10:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b68200 0xc003b68201}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b68270} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b68290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.295: INFO: Pod "nginx-deployment-7b8c6f4498-cjkx7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cjkx7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-cjkx7,UID:a3bd4ef2-82b8-4710-959d-e5f739c505bd,ResourceVersion:245489,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b68350 0xc003b68351}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b683c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b683e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.295: INFO: Pod "nginx-deployment-7b8c6f4498-f9llf" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-f9llf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-f9llf,UID:0dec9eb2-8fd5-4907-b370-59c83beac3f4,ResourceVersion:245303,Generation:0,CreationTimestamp:2020-07-05 14:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b684a0 0xc003b684a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b68510} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b68530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.1.163,StartTime:2020-07-05 14:09:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 14:10:03 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://d5de95fe7f0838e1eb5fa8e7972ef08adeed32c5746ce2733234de1b031af2fd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.295: INFO: Pod "nginx-deployment-7b8c6f4498-gt95r" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gt95r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-gt95r,UID:0751481e-4c97-4f60-8e83-3dcbfe2db8ff,ResourceVersion:245493,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b68600 0xc003b68601}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b68670} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b68690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.296: INFO: Pod "nginx-deployment-7b8c6f4498-hwjxw" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hwjxw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-hwjxw,UID:c780f951-3dcb-4206-9c34-3455b1f251bb,ResourceVersion:245282,Generation:0,CreationTimestamp:2020-07-05 14:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b68750 0xc003b68751}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b687c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b687e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.1.161,StartTime:2020-07-05 14:09:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 14:10:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7c864e9ebc26bf60a1163974e1a29c7963fe7564dbf2781bebb8cc2c783a03e5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.296: INFO: Pod "nginx-deployment-7b8c6f4498-kmd8x" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kmd8x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-kmd8x,UID:bc5de65d-876a-4cf0-969d-41679fd28cc8,ResourceVersion:245464,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b688b0 0xc003b688b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b68920} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b68940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.296: INFO: Pod "nginx-deployment-7b8c6f4498-ldqzc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ldqzc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-ldqzc,UID:492365f0-b427-423b-96a6-0cb517bfdd5b,ResourceVersion:245497,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b68a00 0xc003b68a01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b68a70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b68a90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.296: INFO: Pod "nginx-deployment-7b8c6f4498-mrc62" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mrc62,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-mrc62,UID:8f0be68d-ead6-44c4-84ac-ecffea24508b,ResourceVersion:245290,Generation:0,CreationTimestamp:2020-07-05 14:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b68b50 0xc003b68b51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b68bc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b68be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.220,StartTime:2020-07-05 14:09:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 14:10:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9e6c14cd506f6178cc3c84b95734e3a67b3d0c9293613fc02f464ce9228147b7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.296: INFO: Pod "nginx-deployment-7b8c6f4498-ntzsk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ntzsk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-ntzsk,UID:639062a8-0c30-440d-9e3e-310b29044006,ResourceVersion:245298,Generation:0,CreationTimestamp:2020-07-05 14:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b68cb0 0xc003b68cb1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b68d20} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b68d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:03 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:03 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.1.162,StartTime:2020-07-05 14:09:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 14:10:03 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://94d728a8d4add7fb76b65bc73c61c53626b1002d1ceb673ce3634ebb9d5c54fd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.297: INFO: Pod "nginx-deployment-7b8c6f4498-qsnx7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qsnx7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-qsnx7,UID:e103b616-08a4-464e-83b6-6e4e6b781fbf,ResourceVersion:245486,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b68e10 0xc003b68e11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b68e80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b68ea0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.297: INFO: Pod "nginx-deployment-7b8c6f4498-r7cx8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r7cx8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-r7cx8,UID:16a3cc49-de73-4463-a026-2378698704bd,ResourceVersion:245284,Generation:0,CreationTimestamp:2020-07-05 14:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b68f60 0xc003b68f61}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b68fd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b68ff0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.219,StartTime:2020-07-05 14:09:54 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 14:10:01 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c8b55376d0c2bb82ff778f8a63a6af53b9b87b2c3514b7bc53c8ca01dd45762f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.297: INFO: Pod "nginx-deployment-7b8c6f4498-s6wh5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s6wh5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-s6wh5,UID:41fa1f8c-2e30-43ee-830f-2b88fbf68004,ResourceVersion:245468,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b690c0 0xc003b690c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b69130} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b69150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.297: INFO: Pod "nginx-deployment-7b8c6f4498-sxrzr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sxrzr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-sxrzr,UID:81e461e4-1942-4d1f-9cbc-93c700cb8ee8,ResourceVersion:245477,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b69210 0xc003b69211}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b69280} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b692a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.298: INFO: Pod "nginx-deployment-7b8c6f4498-tw72x" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tw72x,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-tw72x,UID:ad1d01b2-3dac-48e8-a6d4-639aa9b86dde,ResourceVersion:245323,Generation:0,CreationTimestamp:2020-07-05 14:09:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b69360 0xc003b69361}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b693d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b693f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:05 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:05 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:09:54 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.223,StartTime:2020-07-05 14:09:55 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-07-05 14:10:04 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9a56c7a649e5a8765ef0c6b218a6fc1b878d31720130a26e8c53a4f20131551f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.298: INFO: Pod "nginx-deployment-7b8c6f4498-vprts" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vprts,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-vprts,UID:95c02175-7f69-403d-b29f-a10bd3b39a9b,ResourceVersion:245482,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b694c0 0xc003b694c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b69530} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b69550}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jul  5 14:10:12.298: INFO: Pod "nginx-deployment-7b8c6f4498-z59x4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-z59x4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9959,SelfLink:/api/v1/namespaces/deployment-9959/pods/nginx-deployment-7b8c6f4498-z59x4,UID:f81904a8-7296-4431-8c9b-a4e1dca77dc9,ResourceVersion:245507,Generation:0,CreationTimestamp:2020-07-05 14:10:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 ba3677a2-f2be-4669-b1b2-ff702f4789bd 0xc003b69610 0xc003b69611}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-djwwt {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-djwwt,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-djwwt true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003b69680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003b696a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:10:09 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:,StartTime:2020-07-05 14:10:09 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:10:12.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9959" for this suite.
Jul  5 14:10:33.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:10:33.113: INFO: namespace deployment-9959 deletion completed in 20.37616309s

• [SLOW TEST:38.386 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
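
For readers tracing this spec: "proportional scaling" means that when a RollingUpdate Deployment is scaled while a rollout is in progress, the controller splits the replica delta across the old and new ReplicaSets in proportion to their current sizes, which is consistent with the mix of Running and ContainerCreating pods dumped above. A minimal way to observe the behavior by hand, as a sketch (the deployment name, image tags, and replica counts are illustrative, not taken from this run):

# Start a rollout that will stall partway (the :404 tag is a deliberately bad image)
kubectl create deployment nginx-deployment --image=docker.io/library/nginx:1.14-alpine
kubectl scale deployment nginx-deployment --replicas=10
kubectl set image deployment/nginx-deployment nginx=docker.io/library/nginx:404
# Scale up mid-rollout: the extra replicas are distributed proportionally
# between the old and new ReplicaSets instead of all landing on one of them
kubectl scale deployment nginx-deployment --replicas=30
kubectl get rs -l app=nginx-deployment
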
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:10:33.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  5 14:10:33.473: INFO: Waiting up to 5m0s for pod "pod-f4725533-7930-47a4-8650-8ca58f50bac4" in namespace "emptydir-474" to be "success or failure"
Jul  5 14:10:33.478: INFO: Pod "pod-f4725533-7930-47a4-8650-8ca58f50bac4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.068188ms
Jul  5 14:10:35.517: INFO: Pod "pod-f4725533-7930-47a4-8650-8ca58f50bac4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044351386s
Jul  5 14:10:37.522: INFO: Pod "pod-f4725533-7930-47a4-8650-8ca58f50bac4": Phase="Running", Reason="", readiness=true. Elapsed: 4.049393864s
Jul  5 14:10:39.527: INFO: Pod "pod-f4725533-7930-47a4-8650-8ca58f50bac4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.053509374s
STEP: Saw pod success
Jul  5 14:10:39.527: INFO: Pod "pod-f4725533-7930-47a4-8650-8ca58f50bac4" satisfied condition "success or failure"
Jul  5 14:10:39.529: INFO: Trying to get logs from node iruya-worker pod pod-f4725533-7930-47a4-8650-8ca58f50bac4 container test-container: 
STEP: delete the pod
Jul  5 14:10:39.569: INFO: Waiting for pod pod-f4725533-7930-47a4-8650-8ca58f50bac4 to disappear
Jul  5 14:10:39.574: INFO: Pod pod-f4725533-7930-47a4-8650-8ca58f50bac4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:10:39.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-474" for this suite.
Jul  5 14:10:45.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:10:45.689: INFO: namespace emptydir-474 deletion completed in 6.110689352s

• [SLOW TEST:12.575 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
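
The spec above creates a short-lived pod that writes a file with mode 0666 onto a tmpfs-backed emptyDir and verifies the result from the container output. A rough stand-in for what it exercises, using busybox instead of the suite's dedicated mount-test image (pod name and commands are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f && mount | grep test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # Memory medium = tmpfs
EOF
kubectl logs emptydir-tmpfs-demo     # expect -rw-rw-rw- and a tmpfs mount entry
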
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:10:45.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-d8685a48-4887-4716-aaa1-c5750ebed1f2
STEP: Creating secret with name s-test-opt-upd-e4bf37c0-49b5-409d-939f-dcf723025f99
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-d8685a48-4887-4716-aaa1-c5750ebed1f2
STEP: Updating secret s-test-opt-upd-e4bf37c0-49b5-409d-939f-dcf723025f99
STEP: Creating secret with name s-test-opt-create-cf234c7c-db18-4def-adef-52d3cc5db8b1
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:12:18.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9183" for this suite.
Jul  5 14:12:40.384: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:12:40.465: INFO: namespace projected-9183 deletion completed in 22.125430887s

• [SLOW TEST:114.776 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
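
What this spec checks: two secrets are projected into a single volume with optional: true, after which one secret is deleted, another is updated, and a third (initially absent) is created; the kubelet is expected to converge the mounted files without restarting the pod. The projected-volume shape involved looks roughly like this sketch (names are placeholders, not the generated ones above):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo        # illustrative
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "while true; do ls -l /etc/projected; sleep 5; done"]
    volumeMounts:
    - name: all-secrets
      mountPath: /etc/projected
  volumes:
  - name: all-secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del       # may be deleted while the pod runs
          optional: true             # an absent secret does not block the mount
      - secret:
          name: s-test-opt-upd
          optional: true
EOF
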
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:12:40.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jul  5 14:12:40.535: INFO: Pod name pod-release: Found 0 pods out of 1
Jul  5 14:12:45.540: INFO: Pod name pod-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:12:46.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1209" for this suite.
Jul  5 14:12:52.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:12:52.785: INFO: namespace replication-controller-1209 deletion completed in 6.193440077s

• [SLOW TEST:12.319 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
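
"Released" here means that once a pod's labels stop matching the ReplicationController's selector, the controller drops its ownerReference (orphaning the pod) and creates a replacement to restore the replica count. A sketch of reproducing that by hand (names are illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
POD=$(kubectl get pods -l name=pod-release -o jsonpath='{.items[0].metadata.name}')
# Relabel the pod so it no longer matches the RC selector ...
kubectl label pod "$POD" name=released --overwrite
# ... it is released (ownerReferences cleared) and the RC creates a replacement
kubectl get pod "$POD" -o jsonpath='{.metadata.ownerReferences}'
kubectl get pods -l name=pod-release
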
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:12:52.785: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  5 14:12:52.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5580'
Jul  5 14:12:55.684: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  5 14:12:55.684: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jul  5 14:12:55.693: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jul  5 14:12:55.728: INFO: scanned /root for discovery docs: 
Jul  5 14:12:55.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5580'
Jul  5 14:13:11.696: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul  5 14:13:11.696: INFO: stdout: "Created e2e-test-nginx-rc-a70c0ed8f4be3be9c03469276f43c443\nScaling up e2e-test-nginx-rc-a70c0ed8f4be3be9c03469276f43c443 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a70c0ed8f4be3be9c03469276f43c443 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a70c0ed8f4be3be9c03469276f43c443 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jul  5 14:13:11.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5580'
Jul  5 14:13:11.788: INFO: stderr: ""
Jul  5 14:13:11.788: INFO: stdout: "e2e-test-nginx-rc-a70c0ed8f4be3be9c03469276f43c443-6nw58 "
Jul  5 14:13:11.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a70c0ed8f4be3be9c03469276f43c443-6nw58 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5580'
Jul  5 14:13:11.887: INFO: stderr: ""
Jul  5 14:13:11.887: INFO: stdout: "true"
Jul  5 14:13:11.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a70c0ed8f4be3be9c03469276f43c443-6nw58 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5580'
Jul  5 14:13:11.980: INFO: stderr: ""
Jul  5 14:13:11.980: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jul  5 14:13:11.980: INFO: e2e-test-nginx-rc-a70c0ed8f4be3be9c03469276f43c443-6nw58 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jul  5 14:13:11.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5580'
Jul  5 14:13:12.101: INFO: stderr: ""
Jul  5 14:13:12.101: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:13:12.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5580" for this suite.
Jul  5 14:13:34.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:13:34.239: INFO: namespace kubectl-5580 deletion completed in 22.116335113s

• [SLOW TEST:41.453 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
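
Both commands this spec shells out to are deprecated, as the stderr lines above note (kubectl run --generator=run/v1 and kubectl rolling-update). With a Deployment, the closest modern equivalent of a rolling update to the same image is a rollout restart; a sketch, assuming kubectl 1.15 or newer (names illustrative):

# Deployment-based equivalent of the deprecated RC rolling-update flow
kubectl create deployment e2e-test-nginx --image=docker.io/library/nginx:1.14-alpine
kubectl rollout restart deployment/e2e-test-nginx   # re-rolls pods without changing the template image
kubectl rollout status deployment/e2e-test-nginx
kubectl delete deployment e2e-test-nginx
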
------------------------------
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:13:34.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 14:13:34.274: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:13:35.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1129" for this suite.
Jul  5 14:13:41.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:13:41.459: INFO: namespace custom-resource-definition-1129 deletion completed in 6.111921591s

• [SLOW TEST:7.220 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
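
This spec only registers a CustomResourceDefinition against the v1.15 API server and deletes it again. On a server of that vintage the CRD API is apiextensions.k8s.io/v1beta1; a minimal definition of the same shape (the group and kind are made up for illustration):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1   # use apiextensions.k8s.io/v1 on 1.16+
kind: CustomResourceDefinition
metadata:
  name: foos.example.com                   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
EOF
kubectl get crd foos.example.com
kubectl delete crd foos.example.com
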
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:13:41.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0705 14:14:12.075263       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul  5 14:14:12.075: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:14:12.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6808" for this suite.
Jul  5 14:14:18.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:14:18.165: INFO: namespace gc-6808 deletion completed in 6.086438659s

• [SLOW TEST:36.706 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
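
The Orphan propagation policy instructs the garbage collector to strip ownerReferences from dependents instead of cascading the delete, which is why the spec waits 30 seconds to confirm the ReplicaSet survives. A sketch of the same delete from the CLI (kubectl contemporary with this cluster spells the flag --cascade=false; kubectl 1.20+ accepts --cascade=orphan; the deployment name is illustrative):

kubectl create deployment gc-demo --image=docker.io/library/nginx:1.14-alpine
kubectl delete deployment gc-demo --cascade=false
kubectl get rs -l app=gc-demo   # the ReplicaSet is still there, now ownerless
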
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:14:18.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul  5 14:14:18.342: INFO: Waiting up to 5m0s for pod "pod-68177804-467c-4225-ad65-cb5dd7b0904e" in namespace "emptydir-7081" to be "success or failure"
Jul  5 14:14:18.351: INFO: Pod "pod-68177804-467c-4225-ad65-cb5dd7b0904e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.346848ms
Jul  5 14:14:20.355: INFO: Pod "pod-68177804-467c-4225-ad65-cb5dd7b0904e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013190363s
Jul  5 14:14:22.359: INFO: Pod "pod-68177804-467c-4225-ad65-cb5dd7b0904e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017317539s
STEP: Saw pod success
Jul  5 14:14:22.359: INFO: Pod "pod-68177804-467c-4225-ad65-cb5dd7b0904e" satisfied condition "success or failure"
Jul  5 14:14:22.363: INFO: Trying to get logs from node iruya-worker pod pod-68177804-467c-4225-ad65-cb5dd7b0904e container test-container: 
STEP: delete the pod
Jul  5 14:14:22.383: INFO: Waiting for pod pod-68177804-467c-4225-ad65-cb5dd7b0904e to disappear
Jul  5 14:14:22.387: INFO: Pod pod-68177804-467c-4225-ad65-cb5dd7b0904e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:14:22.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7081" for this suite.
Jul  5 14:14:28.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:14:28.504: INFO: namespace emptydir-7081 deletion completed in 6.114331989s

• [SLOW TEST:10.339 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
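
This variant differs from the (root,0666,tmpfs) case above only in that the container runs as an unprivileged user; emptyDir volumes are created world-writable by default, so the write still succeeds. The delta, sketched with busybox again (the UID is arbitrary):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-demo   # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000             # any non-zero UID makes the pod non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "id -u && touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF
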
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:14:28.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jul  5 14:14:28.541: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  5 14:14:28.567: INFO: Waiting for terminating namespaces to be deleted...
Jul  5 14:14:28.570: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Jul  5 14:14:28.575: INFO: kube-proxy-nxrg9 from kube-system started at 2020-07-04 09:21:50 +0000 UTC (1 container status recorded)
Jul  5 14:14:28.575: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  5 14:14:28.575: INFO: kindnet-469kb from kube-system started at 2020-07-04 09:21:50 +0000 UTC (1 container status recorded)
Jul  5 14:14:28.575: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  5 14:14:28.575: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Jul  5 14:14:28.581: INFO: kube-proxy-wvch7 from kube-system started at 2020-07-04 09:21:50 +0000 UTC (1 container status recorded)
Jul  5 14:14:28.581: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  5 14:14:28.581: INFO: kindnet-gj45r from kube-system started at 2020-07-04 09:21:50 +0000 UTC (1 container status recorded)
Jul  5 14:14:28.581: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Jul  5 14:14:28.658: INFO: Pod kindnet-469kb requesting resource cpu=100m on Node iruya-worker
Jul  5 14:14:28.658: INFO: Pod kindnet-gj45r requesting resource cpu=100m on Node iruya-worker2
Jul  5 14:14:28.658: INFO: Pod kube-proxy-nxrg9 requesting resource cpu=0m on Node iruya-worker
Jul  5 14:14:28.658: INFO: Pod kube-proxy-wvch7 requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-503b466d-1046-4ce5-b677-06d7d1b353e9.161ee0c839a72202], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8461/filler-pod-503b466d-1046-4ce5-b677-06d7d1b353e9 to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-503b466d-1046-4ce5-b677-06d7d1b353e9.161ee0c8865dd29e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-503b466d-1046-4ce5-b677-06d7d1b353e9.161ee0c8fc748902], Reason = [Created], Message = [Created container filler-pod-503b466d-1046-4ce5-b677-06d7d1b353e9]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-503b466d-1046-4ce5-b677-06d7d1b353e9.161ee0c90b8945a3], Reason = [Started], Message = [Started container filler-pod-503b466d-1046-4ce5-b677-06d7d1b353e9]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c3f268ec-9626-45e5-8831-fc5ef0addde1.161ee0c83dd7e4ff], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8461/filler-pod-c3f268ec-9626-45e5-8831-fc5ef0addde1 to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c3f268ec-9626-45e5-8831-fc5ef0addde1.161ee0c8ad8beca6], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c3f268ec-9626-45e5-8831-fc5ef0addde1.161ee0c902f7ea01], Reason = [Created], Message = [Created container filler-pod-c3f268ec-9626-45e5-8831-fc5ef0addde1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-c3f268ec-9626-45e5-8831-fc5ef0addde1.161ee0c91253cf95], Reason = [Started], Message = [Started container filler-pod-c3f268ec-9626-45e5-8831-fc5ef0addde1]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.161ee0c9a4d3f25e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:14:35.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8461" for this suite.
Jul  5 14:14:41.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:14:42.252: INFO: namespace sched-pred-8461 deletion completed in 6.337073357s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:13.748 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
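The pattern behind the test above: "filler" pause pods consume almost all allocatable CPU on each node, then one more pod is submitted whose request cannot fit anywhere, producing the FailedScheduling event in the log. A minimal sketch of that final step using the v1.15-era client-go API (newer releases add a context argument to Create; the function name and the 2-CPU request are illustrative assumptions, the real test derives the request from node allocatable):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createOvercommitPod submits a pod that requests more CPU than any node
// has left, so the scheduler should reject it with "Insufficient cpu".
func createOvercommitPod(c kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "additional-pod",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						// Assumed value; the e2e test computes this per node.
						corev1.ResourceCPU: resource.MustParse("2"),
					},
				},
			}},
		},
	}
	_, err := c.CoreV1().Pods(ns).Create(pod)
	return err
}
```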
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:14:42.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-217, will wait for the garbage collector to delete the pods
Jul  5 14:14:46.383: INFO: Deleting Job.batch foo took: 7.743087ms
Jul  5 14:14:46.683: INFO: Terminating Job.batch foo pods took: 300.249733ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:15:25.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-217" for this suite.
Jul  5 14:15:32.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:15:32.098: INFO: namespace job-217 deletion completed in 6.106831675s

• [SLOW TEST:49.846 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
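Deleting the Job removes only the Job object itself at first; its pods are owned via ownerReferences and are cleaned up by the garbage collector, which is what the test spends most of its 49 seconds waiting for. A sketch of an equivalent deletion with the v1.15-era client-go signature (foreground propagation is one way to make dependent cleanup explicit; the e2e test instead polls until the pods vanish, and the function name here is illustrative):

```go
package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJobAndDependents deletes a Job and lets the garbage collector
// remove the pods it owns. With foreground propagation the Job object
// itself is only finalized once its dependents are gone.
func deleteJobAndDependents(c kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationForeground
	return c.BatchV1().Jobs(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
```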
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:15:32.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 14:15:32.216: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4fab8e02-0315-4417-8633-a668200cd5dc" in namespace "projected-605" to be "success or failure"
Jul  5 14:15:32.234: INFO: Pod "downwardapi-volume-4fab8e02-0315-4417-8633-a668200cd5dc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.309261ms
Jul  5 14:15:34.240: INFO: Pod "downwardapi-volume-4fab8e02-0315-4417-8633-a668200cd5dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023630495s
Jul  5 14:15:36.244: INFO: Pod "downwardapi-volume-4fab8e02-0315-4417-8633-a668200cd5dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027692842s
STEP: Saw pod success
Jul  5 14:15:36.244: INFO: Pod "downwardapi-volume-4fab8e02-0315-4417-8633-a668200cd5dc" satisfied condition "success or failure"
Jul  5 14:15:36.246: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4fab8e02-0315-4417-8633-a668200cd5dc container client-container: 
STEP: delete the pod
Jul  5 14:15:36.265: INFO: Waiting for pod downwardapi-volume-4fab8e02-0315-4417-8633-a668200cd5dc to disappear
Jul  5 14:15:36.319: INFO: Pod downwardapi-volume-4fab8e02-0315-4417-8633-a668200cd5dc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:15:36.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-605" for this suite.
Jul  5 14:15:42.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:15:42.423: INFO: namespace projected-605 deletion completed in 6.100514812s

• [SLOW TEST:10.325 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
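The DefaultMode assertion boils down to a single field on the projected volume: every file the downward API writes into the volume is created with that file mode, and the test container stats the file and prints its permissions. A sketch of the relevant spec fragment (volume name, file path, and the caller-supplied mode are illustrative):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// projectedPodInfoVolume builds a projected downward-API volume whose
// files are created with the given mode (e.g. int32(0400)).
func projectedPodInfoVolume(mode int32) corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode, // applied to every projected file
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
}
```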
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:15:42.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 14:15:42.490: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jul  5 14:15:42.509: INFO: Pod name sample-pod: Found 0 pods out of 1
Jul  5 14:15:47.513: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul  5 14:15:47.513: INFO: Creating deployment "test-rolling-update-deployment"
Jul  5 14:15:47.518: INFO: Ensuring deployment "test-rolling-update-deployment" gets the revision after the one held by the adopted replica set "test-rolling-update-controller"
Jul  5 14:15:47.538: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jul  5 14:15:49.546: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Jul  5 14:15:49.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729555347, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729555347, loc:(*time.Location)(0x7eb18c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63729555347, loc:(*time.Location)(0x7eb18c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63729555347, loc:(*time.Location)(0x7eb18c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul  5 14:15:51.554: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jul  5 14:15:51.564: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-2221,SelfLink:/apis/apps/v1/namespaces/deployment-2221/deployments/test-rolling-update-deployment,UID:cf614071-4c98-4789-b9de-d8ca0beb8fed,ResourceVersion:246843,Generation:1,CreationTimestamp:2020-07-05 14:15:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-07-05 14:15:47 +0000 UTC 2020-07-05 14:15:47 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-07-05 14:15:51 +0000 UTC 2020-07-05 14:15:47 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jul  5 14:15:51.567: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-2221,SelfLink:/apis/apps/v1/namespaces/deployment-2221/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:b251b602-f63a-459e-9677-aa7a6fa38f69,ResourceVersion:246832,Generation:1,CreationTimestamp:2020-07-05 14:15:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cf614071-4c98-4789-b9de-d8ca0beb8fed 0xc001d66db7 0xc001d66db8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jul  5 14:15:51.567: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jul  5 14:15:51.567: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-2221,SelfLink:/apis/apps/v1/namespaces/deployment-2221/replicasets/test-rolling-update-controller,UID:8b6b15c6-f133-4844-a084-054505b57374,ResourceVersion:246841,Generation:2,CreationTimestamp:2020-07-05 14:15:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment cf614071-4c98-4789-b9de-d8ca0beb8fed 0xc001d66cd7 0xc001d66cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jul  5 14:15:51.571: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-rc4sl" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-rc4sl,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-2221,SelfLink:/api/v1/namespaces/deployment-2221/pods/test-rolling-update-deployment-79f6b9d75c-rc4sl,UID:2a69b320-486b-45bc-9d9d-caa8398a6569,ResourceVersion:246831,Generation:0,CreationTimestamp:2020-07-05 14:15:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c b251b602-f63a-459e-9677-aa7a6fa38f69 0xc001d67697 0xc001d67698}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-dcgf2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-dcgf2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-dcgf2 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001d67710} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001d67730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:15:47 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:15:51 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:15:51 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:15:47 +0000 UTC  }],Message:,Reason:,HostIP:172.17.0.7,PodIP:10.244.1.185,StartTime:2020-07-05 14:15:47 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-07-05 14:15:50 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://3870c518389e4e3e49d2a30e23bd99b7e34d9cdd0008274e1703be3e2a7a2807}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:15:51.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2221" for this suite.
Jul  5 14:15:57.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:15:57.894: INFO: namespace deployment-2221 deletion completed in 6.319215682s

• [SLOW TEST:15.470 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
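Two details in the object dumps above are worth calling out: the adopted replica set keeps revision annotation 3546343826724305832 while the new replica set gets 3546343826724305833, and the Deployment uses a RollingUpdate strategy with the 25%/25% defaults. That strategy can be written out explicitly as follows (a sketch; the function name is illustrative):

```go
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// rollingUpdateStrategy reproduces the strategy visible in the dump: at
// most 25% of desired replicas may be unavailable, and at most 25% extra
// may exist, while old pods are replaced by new ones.
func rollingUpdateStrategy() appsv1.DeploymentStrategy {
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	return appsv1.DeploymentStrategy{
		Type: appsv1.RollingUpdateDeploymentStrategyType,
		RollingUpdate: &appsv1.RollingUpdateDeployment{
			MaxUnavailable: &maxUnavailable,
			MaxSurge:       &maxSurge,
		},
	}
}
```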
SSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:15:57.894: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 14:16:02.084: INFO: Waiting up to 5m0s for pod "client-envvars-2d425b1d-057f-4b35-8525-1c091d78ef6e" in namespace "pods-9154" to be "success or failure"
Jul  5 14:16:02.109: INFO: Pod "client-envvars-2d425b1d-057f-4b35-8525-1c091d78ef6e": Phase="Pending", Reason="", readiness=false. Elapsed: 25.253487ms
Jul  5 14:16:04.113: INFO: Pod "client-envvars-2d425b1d-057f-4b35-8525-1c091d78ef6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029351624s
Jul  5 14:16:06.117: INFO: Pod "client-envvars-2d425b1d-057f-4b35-8525-1c091d78ef6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03286437s
STEP: Saw pod success
Jul  5 14:16:06.117: INFO: Pod "client-envvars-2d425b1d-057f-4b35-8525-1c091d78ef6e" satisfied condition "success or failure"
Jul  5 14:16:06.120: INFO: Trying to get logs from node iruya-worker pod client-envvars-2d425b1d-057f-4b35-8525-1c091d78ef6e container env3cont: 
STEP: delete the pod
Jul  5 14:16:06.374: INFO: Waiting for pod client-envvars-2d425b1d-057f-4b35-8525-1c091d78ef6e to disappear
Jul  5 14:16:06.413: INFO: Pod client-envvars-2d425b1d-057f-4b35-8525-1c091d78ef6e no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:16:06.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9154" for this suite.
Jul  5 14:16:46.434: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:16:46.512: INFO: namespace pods-9154 deletion completed in 40.095356215s

• [SLOW TEST:48.618 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
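This test passes because the kubelet injects <SERVICE>_SERVICE_HOST and <SERVICE>_SERVICE_PORT environment variables for every service that already exists when a container starts. A sketch of what the env3cont container effectively checks (the service name "fooservice" is an assumption):

```go
package main

import (
	"fmt"
	"os"
)

// Kubernetes exposes each pre-existing service to new pods as environment
// variables named after the service, e.g. FOOSERVICE_SERVICE_HOST.
func main() {
	host := os.Getenv("FOOSERVICE_SERVICE_HOST")
	port := os.Getenv("FOOSERVICE_SERVICE_PORT")
	if host == "" || port == "" {
		fmt.Println("service env vars missing")
		os.Exit(1)
	}
	fmt.Printf("fooservice reachable at %s:%s\n", host, port)
}
```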
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:16:46.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-610d45ad-194f-47dc-b169-2007921bde61
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:16:46.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1473" for this suite.
Jul  5 14:16:52.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:16:52.692: INFO: namespace configmap-1473 deletion completed in 6.11450598s

• [SLOW TEST:6.179 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
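No pod is created in this test; it succeeds precisely because the API server's validation rejects a ConfigMap whose data map contains an empty key. A sketch of the create call that is expected to fail (v1.15-era client-go signature; the function name is illustrative):

```go
package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEmptyKeyConfigMap tries to create a ConfigMap with an empty data
// key; API server validation should reject it, so an error is the
// expected (passing) outcome.
func createEmptyKeyConfigMap(c kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey", Namespace: ns},
		Data:       map[string]string{"": "value"},
	}
	if _, err := c.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
		return nil // rejection is what the test asserts
	}
	return fmt.Errorf("ConfigMap with empty key was unexpectedly accepted")
}
```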
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:16:52.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jul  5 14:16:52.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jul  5 14:16:52.839: INFO: stderr: ""
Jul  5 14:16:52.839: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32780\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32780/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:16:52.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3050" for this suite.
Jul  5 14:16:58.855: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:16:58.936: INFO: namespace kubectl-3050 deletion completed in 6.094064647s

• [SLOW TEST:6.244 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
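The stdout captured above is wrapped in ANSI color escapes (\x1b[0;32m and friends); stripped, it reads "Kubernetes master is running at https://172.30.12.66:32780". A small standalone helper for post-processing such output (a sketch for log readers, not part of the test itself):

```go
package main

import (
	"fmt"
	"regexp"
)

// ansi matches SGR color escape sequences such as \x1b[0;32m.
var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

func main() {
	raw := "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32780\x1b[0m"
	// Prints: Kubernetes master is running at https://172.30.12.66:32780
	fmt.Println(ansi.ReplaceAllString(raw, ""))
}
```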
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:16:58.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-3578/configmap-test-25aa0ad8-b0ae-4d3a-a187-bfbbcacb8272
STEP: Creating a pod to test consume configMaps
Jul  5 14:16:59.020: INFO: Waiting up to 5m0s for pod "pod-configmaps-2a8ab660-e498-4847-9045-a72cbb4e9223" in namespace "configmap-3578" to be "success or failure"
Jul  5 14:16:59.096: INFO: Pod "pod-configmaps-2a8ab660-e498-4847-9045-a72cbb4e9223": Phase="Pending", Reason="", readiness=false. Elapsed: 76.197327ms
Jul  5 14:17:01.100: INFO: Pod "pod-configmaps-2a8ab660-e498-4847-9045-a72cbb4e9223": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07986305s
Jul  5 14:17:03.104: INFO: Pod "pod-configmaps-2a8ab660-e498-4847-9045-a72cbb4e9223": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084110159s
STEP: Saw pod success
Jul  5 14:17:03.104: INFO: Pod "pod-configmaps-2a8ab660-e498-4847-9045-a72cbb4e9223" satisfied condition "success or failure"
Jul  5 14:17:03.107: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-2a8ab660-e498-4847-9045-a72cbb4e9223 container env-test: 
STEP: delete the pod
Jul  5 14:17:03.145: INFO: Waiting for pod pod-configmaps-2a8ab660-e498-4847-9045-a72cbb4e9223 to disappear
Jul  5 14:17:03.161: INFO: Pod pod-configmaps-2a8ab660-e498-4847-9045-a72cbb4e9223 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:17:03.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3578" for this suite.
Jul  5 14:17:09.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:17:09.271: INFO: namespace configmap-3578 deletion completed in 6.106454713s

• [SLOW TEST:10.334 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
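Here the pod consumes the ConfigMap through an environment variable rather than a volume: one key of the ConfigMap is wired into one variable, and the env-test container echoes it so the framework can compare the output. The wiring looks like this (a sketch; parameter names are illustrative):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// configMapEnvVar maps one ConfigMap key into a container environment
// variable, the mechanism exercised by the "consume configMaps" pod above.
func configMapEnvVar(cmName, key, envName string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: envName,
		ValueFrom: &corev1.EnvVarSource{
			ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				Key:                  key,
			},
		},
	}
}
```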
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:17:09.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-08df1f78-97ef-4ccc-b398-7cbbaeb93f66
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-08df1f78-97ef-4ccc-b398-7cbbaeb93f66
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:17:15.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1162" for this suite.
Jul  5 14:17:37.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:17:37.544: INFO: namespace configmap-1162 deletion completed in 22.092333105s

• [SLOW TEST:28.271 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:17:37.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:18:07.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-621" for this suite.
Jul  5 14:18:13.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:18:13.644: INFO: namespace container-runtime-621 deletion completed in 6.127938693s

• [SLOW TEST:36.100 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:18:13.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jul  5 14:18:13.700: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:18:19.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5866" for this suite.
Jul  5 14:18:25.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:18:25.695: INFO: namespace init-container-5866 deletion completed in 6.094675503s

• [SLOW TEST:12.050 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
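With RestartPolicy Never, a failing init container is not retried: the remaining init containers and the app containers never start, and the pod phase goes straight to Failed, which is what this test asserts. A sketch of the pod shape (image names and commands are illustrative assumptions):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// failingInitPodSpec sketches the test pod: the failing init container
// blocks the app container from ever starting, and with RestartPolicy
// Never the pod ends up in phase Failed.
func failingInitPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		InitContainers: []corev1.Container{{
			Name:    "init1",
			Image:   "busybox",
			Command: []string{"/bin/false"}, // exits non-zero
		}},
		Containers: []corev1.Container{{
			Name:    "run1",
			Image:   "busybox",
			Command: []string{"/bin/true"}, // never reached
		}},
	}
}
```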
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:18:25.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:18:31.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8167" for this suite.
Jul  5 14:18:37.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:18:37.452: INFO: namespace watch-8167 deletion completed in 6.186927082s

• [SLOW TEST:11.756 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
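The ordering guarantee under test: watches opened at different resource versions must all deliver the same events in the same order. A sketch of opening one such watch with the v1.15-era client-go signature (newer releases add a context argument; the function name is illustrative):

```go
package sketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchFrom opens a watch starting at the given resourceVersion and
// prints event types in arrival order; the test opens several of these
// at different versions and asserts they all observe the same sequence.
func watchFrom(c kubernetes.Interface, ns, rv string) error {
	w, err := c.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{ResourceVersion: rv})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // ADDED / MODIFIED / DELETED, in order
	}
	return nil
}
```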
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:18:37.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 14:18:37.546: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc7d7377-2e53-4c4c-8cb4-91033681c311" in namespace "downward-api-9832" to be "success or failure"
Jul  5 14:18:37.579: INFO: Pod "downwardapi-volume-bc7d7377-2e53-4c4c-8cb4-91033681c311": Phase="Pending", Reason="", readiness=false. Elapsed: 32.444842ms
Jul  5 14:18:39.583: INFO: Pod "downwardapi-volume-bc7d7377-2e53-4c4c-8cb4-91033681c311": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036768654s
Jul  5 14:18:41.587: INFO: Pod "downwardapi-volume-bc7d7377-2e53-4c4c-8cb4-91033681c311": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040760246s
STEP: Saw pod success
Jul  5 14:18:41.587: INFO: Pod "downwardapi-volume-bc7d7377-2e53-4c4c-8cb4-91033681c311" satisfied condition "success or failure"
Jul  5 14:18:41.590: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bc7d7377-2e53-4c4c-8cb4-91033681c311 container client-container: 
STEP: delete the pod
Jul  5 14:18:41.641: INFO: Waiting for pod downwardapi-volume-bc7d7377-2e53-4c4c-8cb4-91033681c311 to disappear
Jul  5 14:18:41.744: INFO: Pod downwardapi-volume-bc7d7377-2e53-4c4c-8cb4-91033681c311 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:18:41.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9832" for this suite.
Jul  5 14:18:47.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:18:47.835: INFO: namespace downward-api-9832 deletion completed in 6.08693706s

• [SLOW TEST:10.383 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:18:47.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jul  5 14:18:52.494: INFO: Successfully updated pod "labelsupdate57d9d966-fc74-418b-bdea-099290f8b700"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:18:56.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3749" for this suite.
Jul  5 14:19:18.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:19:18.614: INFO: namespace projected-3749 deletion completed in 22.094645924s

• [SLOW TEST:30.777 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:19:18.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jul  5 14:19:18.675: INFO: Waiting up to 5m0s for pod "client-containers-d548445a-1f78-449c-8271-7951cf7db837" in namespace "containers-4652" to be "success or failure"
Jul  5 14:19:18.678: INFO: Pod "client-containers-d548445a-1f78-449c-8271-7951cf7db837": Phase="Pending", Reason="", readiness=false. Elapsed: 3.144149ms
Jul  5 14:19:20.682: INFO: Pod "client-containers-d548445a-1f78-449c-8271-7951cf7db837": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00708622s
Jul  5 14:19:22.687: INFO: Pod "client-containers-d548445a-1f78-449c-8271-7951cf7db837": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011632386s
STEP: Saw pod success
Jul  5 14:19:22.687: INFO: Pod "client-containers-d548445a-1f78-449c-8271-7951cf7db837" satisfied condition "success or failure"
Jul  5 14:19:22.690: INFO: Trying to get logs from node iruya-worker2 pod client-containers-d548445a-1f78-449c-8271-7951cf7db837 container test-container: 
STEP: delete the pod
Jul  5 14:19:22.713: INFO: Waiting for pod client-containers-d548445a-1f78-449c-8271-7951cf7db837 to disappear
Jul  5 14:19:22.717: INFO: Pod client-containers-d548445a-1f78-449c-8271-7951cf7db837 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:19:22.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4652" for this suite.
Jul  5 14:19:28.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:19:28.829: INFO: namespace containers-4652 deletion completed in 6.108227382s

• [SLOW TEST:10.215 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
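"Override all" means both halves of the container invocation are replaced: Command overrides the image's ENTRYPOINT and Args overrides its CMD. A sketch of the two fields (image and command are illustrative assumptions, not the test's exact payload):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// overrideAllContainer sets both Command and Args, so neither the
// image's ENTRYPOINT nor its CMD is used at runtime.
func overrideAllContainer() corev1.Container {
	return corev1.Container{
		Name:    "test-container",
		Image:   "busybox",
		Command: []string{"/bin/sh", "-c"},      // overrides ENTRYPOINT
		Args:    []string{"echo override all"},  // overrides CMD
	}
}
```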
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:19:28.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jul  5 14:19:28.972: INFO: Waiting up to 5m0s for pod "downward-api-23420840-4c6d-42f3-88a5-b2ade22aeb5e" in namespace "downward-api-1808" to be "success or failure"
Jul  5 14:19:28.993: INFO: Pod "downward-api-23420840-4c6d-42f3-88a5-b2ade22aeb5e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.99618ms
Jul  5 14:19:30.998: INFO: Pod "downward-api-23420840-4c6d-42f3-88a5-b2ade22aeb5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026099981s
Jul  5 14:19:33.003: INFO: Pod "downward-api-23420840-4c6d-42f3-88a5-b2ade22aeb5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031087672s
STEP: Saw pod success
Jul  5 14:19:33.003: INFO: Pod "downward-api-23420840-4c6d-42f3-88a5-b2ade22aeb5e" satisfied condition "success or failure"
Jul  5 14:19:33.006: INFO: Trying to get logs from node iruya-worker2 pod downward-api-23420840-4c6d-42f3-88a5-b2ade22aeb5e container dapi-container: 
STEP: delete the pod
Jul  5 14:19:33.225: INFO: Waiting for pod downward-api-23420840-4c6d-42f3-88a5-b2ade22aeb5e to disappear
Jul  5 14:19:33.251: INFO: Pod downward-api-23420840-4c6d-42f3-88a5-b2ade22aeb5e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:19:33.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1808" for this suite.
Jul  5 14:19:39.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:19:39.447: INFO: namespace downward-api-1808 deletion completed in 6.192758628s

• [SLOW TEST:10.619 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:19:39.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul  5 14:19:43.600: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:19:43.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3124" for this suite.
Jul  5 14:19:49.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:19:49.779: INFO: namespace container-runtime-3124 deletion completed in 6.10862558s

• [SLOW TEST:10.331 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
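The "Expected: &{DONE}" assertion works because the container writes DONE to a custom TerminationMessagePath while running as a non-root user, and the kubelet copies that file into the container's termination status. A sketch of such a container (UID, image, and path are illustrative assumptions):

```go
package sketch

import corev1 "k8s.io/api/core/v1"

// terminationMessageContainer writes its termination message to a
// non-default path as a non-root user; the kubelet surfaces the file's
// contents in the terminated container's status.
func terminationMessageContainer() corev1.Container {
	uid := int64(1000)
	return corev1.Container{
		Name:                   "termination-message-container",
		Image:                  "busybox",
		Command:                []string{"/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
	}
}
```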
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:19:49.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 14:19:49.890: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d159dc25-7476-470f-876a-3afa09f40065" in namespace "downward-api-6305" to be "success or failure"
Jul  5 14:19:49.951: INFO: Pod "downwardapi-volume-d159dc25-7476-470f-876a-3afa09f40065": Phase="Pending", Reason="", readiness=false. Elapsed: 60.490538ms
Jul  5 14:19:51.955: INFO: Pod "downwardapi-volume-d159dc25-7476-470f-876a-3afa09f40065": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064692067s
Jul  5 14:19:53.959: INFO: Pod "downwardapi-volume-d159dc25-7476-470f-876a-3afa09f40065": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068954433s
STEP: Saw pod success
Jul  5 14:19:53.959: INFO: Pod "downwardapi-volume-d159dc25-7476-470f-876a-3afa09f40065" satisfied condition "success or failure"
Jul  5 14:19:53.962: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d159dc25-7476-470f-876a-3afa09f40065 container client-container: 
STEP: delete the pod
Jul  5 14:19:53.995: INFO: Waiting for pod downwardapi-volume-d159dc25-7476-470f-876a-3afa09f40065 to disappear
Jul  5 14:19:54.008: INFO: Pod downwardapi-volume-d159dc25-7476-470f-876a-3afa09f40065 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:19:54.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6305" for this suite.
Jul  5 14:20:00.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:20:00.099: INFO: namespace downward-api-6305 deletion completed in 6.086128971s

• [SLOW TEST:10.320 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
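
Here the downward API volume plugin writes the container's CPU limit into a file, and the test asserts on that file's contents. A hedged sketch of a comparable pod, with assumed name, image, and limit value:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-cpu-limit-demo      # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.29              # assumed image
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: 500m                    # assumed limit; the file below should then contain 500
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_limit
          resourceFieldRef:
            containerName: client-container
            resource: limits.cpu
            divisor: 1m                # report the limit in millicores
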
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:20:00.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-cqz8
STEP: Creating a pod to test atomic-volume-subpath
Jul  5 14:20:00.223: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-cqz8" in namespace "subpath-6789" to be "success or failure"
Jul  5 14:20:00.227: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.572991ms
Jul  5 14:20:02.232: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008489819s
Jul  5 14:20:04.237: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Running", Reason="", readiness=true. Elapsed: 4.012979641s
Jul  5 14:20:06.241: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Running", Reason="", readiness=true. Elapsed: 6.017765948s
Jul  5 14:20:08.245: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Running", Reason="", readiness=true. Elapsed: 8.02193686s
Jul  5 14:20:10.250: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Running", Reason="", readiness=true. Elapsed: 10.026476408s
Jul  5 14:20:12.254: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Running", Reason="", readiness=true. Elapsed: 12.030959301s
Jul  5 14:20:14.259: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Running", Reason="", readiness=true. Elapsed: 14.03498118s
Jul  5 14:20:16.263: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Running", Reason="", readiness=true. Elapsed: 16.039522619s
Jul  5 14:20:18.268: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Running", Reason="", readiness=true. Elapsed: 18.044067625s
Jul  5 14:20:20.272: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Running", Reason="", readiness=true. Elapsed: 20.04861745s
Jul  5 14:20:22.277: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Running", Reason="", readiness=true. Elapsed: 22.053028613s
Jul  5 14:20:24.280: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Running", Reason="", readiness=true. Elapsed: 24.056930947s
Jul  5 14:20:26.285: INFO: Pod "pod-subpath-test-secret-cqz8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.061563617s
STEP: Saw pod success
Jul  5 14:20:26.285: INFO: Pod "pod-subpath-test-secret-cqz8" satisfied condition "success or failure"
Jul  5 14:20:26.288: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-cqz8 container test-container-subpath-secret-cqz8: 
STEP: delete the pod
Jul  5 14:20:26.339: INFO: Waiting for pod pod-subpath-test-secret-cqz8 to disappear
Jul  5 14:20:26.348: INFO: Pod pod-subpath-test-secret-cqz8 no longer exists
STEP: Deleting pod pod-subpath-test-secret-cqz8
Jul  5 14:20:26.348: INFO: Deleting pod "pod-subpath-test-secret-cqz8" in namespace "subpath-6789"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:20:26.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6789" for this suite.
Jul  5 14:20:32.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:20:32.439: INFO: namespace subpath-6789 deletion completed in 6.085143033s

• [SLOW TEST:32.340 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
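
This spec mounts a single key of a Secret into a container via subPath and verifies the content stays readable for the life of the pod. Unlike a whole-volume mount, a subPath bind does not follow the atomic-writer symlink updates, which is what the Atomic writer volumes group probes. A sketch with assumed names and image:

  apiVersion: v1
  kind: Secret
  metadata:
    name: subpath-demo-secret          # illustrative name
  stringData:
    key.txt: hello
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-secret-demo          # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: busybox:1.29              # assumed image
      command: ["sh", "-c", "for i in $(seq 1 10); do cat /volume/key.txt; sleep 2; done"]
      volumeMounts:
      - name: secret-vol
        mountPath: /volume/key.txt
        subPath: key.txt               # bind just this key instead of the whole volume
    volumes:
    - name: secret-vol
      secret:
        secretName: subpath-demo-secret

The repeated reads in the command mirror why the pod above stays in Running for roughly 25 seconds before succeeding.
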
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:20:32.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jul  5 14:20:37.032: INFO: Successfully updated pod "annotationupdatecc9d4881-1543-4a4e-a593-30b903b273a8"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:20:41.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7787" for this suite.
Jul  5 14:21:03.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:21:03.167: INFO: namespace projected-7787 deletion completed in 22.089722058s

• [SLOW TEST:30.728 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
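
The projected downward API test above mutates the pod's annotations after creation and waits for the kubelet to rewrite the projected file, which is why it logs "Successfully updated pod" before teardown. A sketch of such a pod, with assumed names and image:

  apiVersion: v1
  kind: Pod
  metadata:
    name: annotation-update-demo       # illustrative name
    annotations:
      build: one
  spec:
    containers:
    - name: main
      image: busybox:1.29              # assumed image
      command: ["sh", "-c", "while true; do cat /podinfo/annotations; sleep 5; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: annotations
              fieldRef:
                fieldPath: metadata.annotations

Running `kubectl annotate pod annotation-update-demo build=two --overwrite` should eventually be reflected in /podinfo/annotations; a subPath mount of the same file would not see the update.
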
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:21:03.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5190
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5190
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-5190

Jul  5 14:21:03.317: INFO: Found 0 stateful pods, waiting for 1
Jul  5 14:21:13.322: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul  5 14:21:13.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5190 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  5 14:21:13.599: INFO: stderr: "I0705 14:21:13.472647    1889 log.go:172] (0xc000acc630) (0xc000736aa0) Create stream\nI0705 14:21:13.472716    1889 log.go:172] (0xc000acc630) (0xc000736aa0) Stream added, broadcasting: 1\nI0705 14:21:13.476741    1889 log.go:172] (0xc000acc630) Reply frame received for 1\nI0705 14:21:13.476789    1889 log.go:172] (0xc000acc630) (0xc0007361e0) Create stream\nI0705 14:21:13.476805    1889 log.go:172] (0xc000acc630) (0xc0007361e0) Stream added, broadcasting: 3\nI0705 14:21:13.477786    1889 log.go:172] (0xc000acc630) Reply frame received for 3\nI0705 14:21:13.477819    1889 log.go:172] (0xc000acc630) (0xc000736280) Create stream\nI0705 14:21:13.477829    1889 log.go:172] (0xc000acc630) (0xc000736280) Stream added, broadcasting: 5\nI0705 14:21:13.478617    1889 log.go:172] (0xc000acc630) Reply frame received for 5\nI0705 14:21:13.558914    1889 log.go:172] (0xc000acc630) Data frame received for 5\nI0705 14:21:13.558944    1889 log.go:172] (0xc000736280) (5) Data frame handling\nI0705 14:21:13.558965    1889 log.go:172] (0xc000736280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0705 14:21:13.590809    1889 log.go:172] (0xc000acc630) Data frame received for 5\nI0705 14:21:13.590858    1889 log.go:172] (0xc000736280) (5) Data frame handling\nI0705 14:21:13.590894    1889 log.go:172] (0xc000acc630) Data frame received for 3\nI0705 14:21:13.590913    1889 log.go:172] (0xc0007361e0) (3) Data frame handling\nI0705 14:21:13.590934    1889 log.go:172] (0xc0007361e0) (3) Data frame sent\nI0705 14:21:13.591446    1889 log.go:172] (0xc000acc630) Data frame received for 3\nI0705 14:21:13.591476    1889 log.go:172] (0xc0007361e0) (3) Data frame handling\nI0705 14:21:13.593800    1889 log.go:172] (0xc000acc630) Data frame received for 1\nI0705 14:21:13.593827    1889 log.go:172] (0xc000736aa0) (1) Data frame handling\nI0705 14:21:13.593839    1889 log.go:172] (0xc000736aa0) (1) Data frame sent\nI0705 14:21:13.593855    1889 log.go:172] (0xc000acc630) (0xc000736aa0) Stream removed, broadcasting: 1\nI0705 14:21:13.593867    1889 log.go:172] (0xc000acc630) Go away received\nI0705 14:21:13.594428    1889 log.go:172] (0xc000acc630) (0xc000736aa0) Stream removed, broadcasting: 1\nI0705 14:21:13.594452    1889 log.go:172] (0xc000acc630) (0xc0007361e0) Stream removed, broadcasting: 3\nI0705 14:21:13.594465    1889 log.go:172] (0xc000acc630) (0xc000736280) Stream removed, broadcasting: 5\n"
Jul  5 14:21:13.600: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  5 14:21:13.600: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  5 14:21:13.604: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul  5 14:21:23.608: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  5 14:21:23.608: INFO: Waiting for statefulset status.replicas updated to 0
Jul  5 14:21:23.620: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999461s
Jul  5 14:21:24.626: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994684423s
Jul  5 14:21:25.629: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.989747314s
Jul  5 14:21:26.634: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.985937426s
Jul  5 14:21:27.638: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.981608263s
Jul  5 14:21:28.642: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.977336032s
Jul  5 14:21:29.647: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.973013678s
Jul  5 14:21:30.652: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.96825318s
Jul  5 14:21:31.656: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.96361318s
Jul  5 14:21:32.661: INFO: Verifying statefulset ss doesn't scale past 1 for another 959.010605ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5190
Jul  5 14:21:33.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5190 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  5 14:21:33.941: INFO: stderr: "I0705 14:21:33.823718    1911 log.go:172] (0xc000a72420) (0xc0007886e0) Create stream\nI0705 14:21:33.823783    1911 log.go:172] (0xc000a72420) (0xc0007886e0) Stream added, broadcasting: 1\nI0705 14:21:33.831823    1911 log.go:172] (0xc000a72420) Reply frame received for 1\nI0705 14:21:33.831877    1911 log.go:172] (0xc000a72420) (0xc000788000) Create stream\nI0705 14:21:33.831888    1911 log.go:172] (0xc000a72420) (0xc000788000) Stream added, broadcasting: 3\nI0705 14:21:33.832881    1911 log.go:172] (0xc000a72420) Reply frame received for 3\nI0705 14:21:33.832944    1911 log.go:172] (0xc000a72420) (0xc0006fa140) Create stream\nI0705 14:21:33.832962    1911 log.go:172] (0xc000a72420) (0xc0006fa140) Stream added, broadcasting: 5\nI0705 14:21:33.834181    1911 log.go:172] (0xc000a72420) Reply frame received for 5\nI0705 14:21:33.932841    1911 log.go:172] (0xc000a72420) Data frame received for 3\nI0705 14:21:33.932879    1911 log.go:172] (0xc000788000) (3) Data frame handling\nI0705 14:21:33.932894    1911 log.go:172] (0xc000788000) (3) Data frame sent\nI0705 14:21:33.932908    1911 log.go:172] (0xc000a72420) Data frame received for 3\nI0705 14:21:33.932918    1911 log.go:172] (0xc000788000) (3) Data frame handling\nI0705 14:21:33.932968    1911 log.go:172] (0xc000a72420) Data frame received for 5\nI0705 14:21:33.933005    1911 log.go:172] (0xc0006fa140) (5) Data frame handling\nI0705 14:21:33.933030    1911 log.go:172] (0xc0006fa140) (5) Data frame sent\nI0705 14:21:33.933046    1911 log.go:172] (0xc000a72420) Data frame received for 5\nI0705 14:21:33.933057    1911 log.go:172] (0xc0006fa140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0705 14:21:33.935152    1911 log.go:172] (0xc000a72420) Data frame received for 1\nI0705 14:21:33.935189    1911 log.go:172] (0xc0007886e0) (1) Data frame handling\nI0705 14:21:33.935244    1911 log.go:172] (0xc0007886e0) (1) Data frame sent\nI0705 14:21:33.935282    1911 log.go:172] (0xc000a72420) (0xc0007886e0) Stream removed, broadcasting: 1\nI0705 14:21:33.935314    1911 log.go:172] (0xc000a72420) Go away received\nI0705 14:21:33.935754    1911 log.go:172] (0xc000a72420) (0xc0007886e0) Stream removed, broadcasting: 1\nI0705 14:21:33.935779    1911 log.go:172] (0xc000a72420) (0xc000788000) Stream removed, broadcasting: 3\nI0705 14:21:33.935791    1911 log.go:172] (0xc000a72420) (0xc0006fa140) Stream removed, broadcasting: 5\n"
Jul  5 14:21:33.941: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  5 14:21:33.941: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  5 14:21:33.945: INFO: Found 1 stateful pods, waiting for 3
Jul  5 14:21:43.950: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  5 14:21:43.950: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  5 14:21:43.950: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul  5 14:21:43.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5190 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  5 14:21:44.191: INFO: stderr: "I0705 14:21:44.088984    1931 log.go:172] (0xc0005fa790) (0xc000656be0) Create stream\nI0705 14:21:44.089045    1931 log.go:172] (0xc0005fa790) (0xc000656be0) Stream added, broadcasting: 1\nI0705 14:21:44.093297    1931 log.go:172] (0xc0005fa790) Reply frame received for 1\nI0705 14:21:44.093363    1931 log.go:172] (0xc0005fa790) (0xc0008ca000) Create stream\nI0705 14:21:44.093381    1931 log.go:172] (0xc0005fa790) (0xc0008ca000) Stream added, broadcasting: 3\nI0705 14:21:44.098342    1931 log.go:172] (0xc0005fa790) Reply frame received for 3\nI0705 14:21:44.098407    1931 log.go:172] (0xc0005fa790) (0xc000656c80) Create stream\nI0705 14:21:44.098431    1931 log.go:172] (0xc0005fa790) (0xc000656c80) Stream added, broadcasting: 5\nI0705 14:21:44.099397    1931 log.go:172] (0xc0005fa790) Reply frame received for 5\nI0705 14:21:44.183613    1931 log.go:172] (0xc0005fa790) Data frame received for 5\nI0705 14:21:44.183641    1931 log.go:172] (0xc000656c80) (5) Data frame handling\nI0705 14:21:44.183650    1931 log.go:172] (0xc000656c80) (5) Data frame sent\nI0705 14:21:44.183655    1931 log.go:172] (0xc0005fa790) Data frame received for 5\nI0705 14:21:44.183659    1931 log.go:172] (0xc000656c80) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0705 14:21:44.183675    1931 log.go:172] (0xc0005fa790) Data frame received for 3\nI0705 14:21:44.183679    1931 log.go:172] (0xc0008ca000) (3) Data frame handling\nI0705 14:21:44.183684    1931 log.go:172] (0xc0008ca000) (3) Data frame sent\nI0705 14:21:44.183688    1931 log.go:172] (0xc0005fa790) Data frame received for 3\nI0705 14:21:44.183691    1931 log.go:172] (0xc0008ca000) (3) Data frame handling\nI0705 14:21:44.185467    1931 log.go:172] (0xc0005fa790) Data frame received for 1\nI0705 14:21:44.185504    1931 log.go:172] (0xc000656be0) (1) Data frame handling\nI0705 14:21:44.185527    1931 log.go:172] (0xc000656be0) (1) Data frame sent\nI0705 14:21:44.185592    1931 log.go:172] (0xc0005fa790) (0xc000656be0) Stream removed, broadcasting: 1\nI0705 14:21:44.185841    1931 log.go:172] (0xc0005fa790) Go away received\nI0705 14:21:44.186027    1931 log.go:172] (0xc0005fa790) (0xc000656be0) Stream removed, broadcasting: 1\nI0705 14:21:44.186065    1931 log.go:172] (0xc0005fa790) (0xc0008ca000) Stream removed, broadcasting: 3\nI0705 14:21:44.186088    1931 log.go:172] (0xc0005fa790) (0xc000656c80) Stream removed, broadcasting: 5\n"
Jul  5 14:21:44.191: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  5 14:21:44.191: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  5 14:21:44.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5190 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  5 14:21:44.414: INFO: stderr: "I0705 14:21:44.316987    1952 log.go:172] (0xc000a3e630) (0xc000602aa0) Create stream\nI0705 14:21:44.317045    1952 log.go:172] (0xc000a3e630) (0xc000602aa0) Stream added, broadcasting: 1\nI0705 14:21:44.322093    1952 log.go:172] (0xc000a3e630) Reply frame received for 1\nI0705 14:21:44.322168    1952 log.go:172] (0xc000a3e630) (0xc000a7c000) Create stream\nI0705 14:21:44.322206    1952 log.go:172] (0xc000a3e630) (0xc000a7c000) Stream added, broadcasting: 3\nI0705 14:21:44.324442    1952 log.go:172] (0xc000a3e630) Reply frame received for 3\nI0705 14:21:44.324473    1952 log.go:172] (0xc000a3e630) (0xc000602b40) Create stream\nI0705 14:21:44.324484    1952 log.go:172] (0xc000a3e630) (0xc000602b40) Stream added, broadcasting: 5\nI0705 14:21:44.325552    1952 log.go:172] (0xc000a3e630) Reply frame received for 5\nI0705 14:21:44.378288    1952 log.go:172] (0xc000a3e630) Data frame received for 5\nI0705 14:21:44.378331    1952 log.go:172] (0xc000602b40) (5) Data frame handling\nI0705 14:21:44.378361    1952 log.go:172] (0xc000602b40) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0705 14:21:44.405759    1952 log.go:172] (0xc000a3e630) Data frame received for 3\nI0705 14:21:44.405785    1952 log.go:172] (0xc000a7c000) (3) Data frame handling\nI0705 14:21:44.405804    1952 log.go:172] (0xc000a7c000) (3) Data frame sent\nI0705 14:21:44.406137    1952 log.go:172] (0xc000a3e630) Data frame received for 5\nI0705 14:21:44.406176    1952 log.go:172] (0xc000602b40) (5) Data frame handling\nI0705 14:21:44.406205    1952 log.go:172] (0xc000a3e630) Data frame received for 3\nI0705 14:21:44.406216    1952 log.go:172] (0xc000a7c000) (3) Data frame handling\nI0705 14:21:44.408079    1952 log.go:172] (0xc000a3e630) Data frame received for 1\nI0705 14:21:44.408303    1952 log.go:172] (0xc000602aa0) (1) Data frame handling\nI0705 14:21:44.408324    1952 log.go:172] (0xc000602aa0) (1) Data frame sent\nI0705 14:21:44.408350    1952 log.go:172] (0xc000a3e630) (0xc000602aa0) Stream removed, broadcasting: 1\nI0705 14:21:44.408381    1952 log.go:172] (0xc000a3e630) Go away received\nI0705 14:21:44.408906    1952 log.go:172] (0xc000a3e630) (0xc000602aa0) Stream removed, broadcasting: 1\nI0705 14:21:44.408933    1952 log.go:172] (0xc000a3e630) (0xc000a7c000) Stream removed, broadcasting: 3\nI0705 14:21:44.408961    1952 log.go:172] (0xc000a3e630) (0xc000602b40) Stream removed, broadcasting: 5\n"
Jul  5 14:21:44.414: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  5 14:21:44.414: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  5 14:21:44.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5190 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  5 14:21:44.651: INFO: stderr: "I0705 14:21:44.539639    1972 log.go:172] (0xc000116fd0) (0xc0005bcb40) Create stream\nI0705 14:21:44.539689    1972 log.go:172] (0xc000116fd0) (0xc0005bcb40) Stream added, broadcasting: 1\nI0705 14:21:44.541526    1972 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0705 14:21:44.541569    1972 log.go:172] (0xc000116fd0) (0xc000998000) Create stream\nI0705 14:21:44.541589    1972 log.go:172] (0xc000116fd0) (0xc000998000) Stream added, broadcasting: 3\nI0705 14:21:44.542485    1972 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0705 14:21:44.542524    1972 log.go:172] (0xc000116fd0) (0xc000a60000) Create stream\nI0705 14:21:44.542542    1972 log.go:172] (0xc000116fd0) (0xc000a60000) Stream added, broadcasting: 5\nI0705 14:21:44.543470    1972 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0705 14:21:44.613994    1972 log.go:172] (0xc000116fd0) Data frame received for 5\nI0705 14:21:44.614024    1972 log.go:172] (0xc000a60000) (5) Data frame handling\nI0705 14:21:44.614043    1972 log.go:172] (0xc000a60000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0705 14:21:44.643657    1972 log.go:172] (0xc000116fd0) Data frame received for 3\nI0705 14:21:44.643691    1972 log.go:172] (0xc000998000) (3) Data frame handling\nI0705 14:21:44.643708    1972 log.go:172] (0xc000998000) (3) Data frame sent\nI0705 14:21:44.643718    1972 log.go:172] (0xc000116fd0) Data frame received for 3\nI0705 14:21:44.643726    1972 log.go:172] (0xc000998000) (3) Data frame handling\nI0705 14:21:44.643780    1972 log.go:172] (0xc000116fd0) Data frame received for 5\nI0705 14:21:44.643810    1972 log.go:172] (0xc000a60000) (5) Data frame handling\nI0705 14:21:44.646135    1972 log.go:172] (0xc000116fd0) Data frame received for 1\nI0705 14:21:44.646178    1972 log.go:172] (0xc0005bcb40) (1) Data frame handling\nI0705 14:21:44.646205    1972 log.go:172] (0xc0005bcb40) (1) Data frame sent\nI0705 14:21:44.646245    1972 log.go:172] (0xc000116fd0) (0xc0005bcb40) Stream removed, broadcasting: 1\nI0705 14:21:44.646275    1972 log.go:172] (0xc000116fd0) Go away received\nI0705 14:21:44.646825    1972 log.go:172] (0xc000116fd0) (0xc0005bcb40) Stream removed, broadcasting: 1\nI0705 14:21:44.646853    1972 log.go:172] (0xc000116fd0) (0xc000998000) Stream removed, broadcasting: 3\nI0705 14:21:44.646871    1972 log.go:172] (0xc000116fd0) (0xc000a60000) Stream removed, broadcasting: 5\n"
Jul  5 14:21:44.651: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  5 14:21:44.651: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  5 14:21:44.651: INFO: Waiting for statefulset status.replicas updated to 0
Jul  5 14:21:44.655: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jul  5 14:21:54.667: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  5 14:21:54.667: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul  5 14:21:54.667: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul  5 14:21:54.783: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999668s
Jul  5 14:21:55.794: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.889985914s
Jul  5 14:21:56.799: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.878625934s
Jul  5 14:21:57.805: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.873272688s
Jul  5 14:21:58.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.867808126s
Jul  5 14:21:59.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.863842225s
Jul  5 14:22:00.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.858766118s
Jul  5 14:22:01.825: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.85269797s
Jul  5 14:22:02.829: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.847315841s
Jul  5 14:22:03.834: INFO: Verifying statefulset ss doesn't scale past 3 for another 843.183847ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5190
Jul  5 14:22:04.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5190 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  5 14:22:05.092: INFO: stderr: "I0705 14:22:04.977104    1992 log.go:172] (0xc000aa4420) (0xc00051a6e0) Create stream\nI0705 14:22:04.977323    1992 log.go:172] (0xc000aa4420) (0xc00051a6e0) Stream added, broadcasting: 1\nI0705 14:22:04.981468    1992 log.go:172] (0xc000aa4420) Reply frame received for 1\nI0705 14:22:04.981511    1992 log.go:172] (0xc000aa4420) (0xc00051a780) Create stream\nI0705 14:22:04.981526    1992 log.go:172] (0xc000aa4420) (0xc00051a780) Stream added, broadcasting: 3\nI0705 14:22:04.983493    1992 log.go:172] (0xc000aa4420) Reply frame received for 3\nI0705 14:22:04.983545    1992 log.go:172] (0xc000aa4420) (0xc00051a820) Create stream\nI0705 14:22:04.983556    1992 log.go:172] (0xc000aa4420) (0xc00051a820) Stream added, broadcasting: 5\nI0705 14:22:04.985427    1992 log.go:172] (0xc000aa4420) Reply frame received for 5\nI0705 14:22:05.084712    1992 log.go:172] (0xc000aa4420) Data frame received for 5\nI0705 14:22:05.084759    1992 log.go:172] (0xc00051a820) (5) Data frame handling\nI0705 14:22:05.084773    1992 log.go:172] (0xc00051a820) (5) Data frame sent\nI0705 14:22:05.084784    1992 log.go:172] (0xc000aa4420) Data frame received for 5\nI0705 14:22:05.084794    1992 log.go:172] (0xc00051a820) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0705 14:22:05.084819    1992 log.go:172] (0xc000aa4420) Data frame received for 3\nI0705 14:22:05.084829    1992 log.go:172] (0xc00051a780) (3) Data frame handling\nI0705 14:22:05.084840    1992 log.go:172] (0xc00051a780) (3) Data frame sent\nI0705 14:22:05.084850    1992 log.go:172] (0xc000aa4420) Data frame received for 3\nI0705 14:22:05.084860    1992 log.go:172] (0xc00051a780) (3) Data frame handling\nI0705 14:22:05.086673    1992 log.go:172] (0xc000aa4420) Data frame received for 1\nI0705 14:22:05.086700    1992 log.go:172] (0xc00051a6e0) (1) Data frame handling\nI0705 14:22:05.086726    1992 log.go:172] (0xc00051a6e0) (1) Data frame sent\nI0705 14:22:05.086752    1992 log.go:172] (0xc000aa4420) (0xc00051a6e0) Stream removed, broadcasting: 1\nI0705 14:22:05.086776    1992 log.go:172] (0xc000aa4420) Go away received\nI0705 14:22:05.087008    1992 log.go:172] (0xc000aa4420) (0xc00051a6e0) Stream removed, broadcasting: 1\nI0705 14:22:05.087029    1992 log.go:172] (0xc000aa4420) (0xc00051a780) Stream removed, broadcasting: 3\nI0705 14:22:05.087037    1992 log.go:172] (0xc000aa4420) (0xc00051a820) Stream removed, broadcasting: 5\n"
Jul  5 14:22:05.092: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  5 14:22:05.092: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  5 14:22:05.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5190 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  5 14:22:05.295: INFO: stderr: "I0705 14:22:05.210650    2012 log.go:172] (0xc0008a8370) (0xc000292820) Create stream\nI0705 14:22:05.210700    2012 log.go:172] (0xc0008a8370) (0xc000292820) Stream added, broadcasting: 1\nI0705 14:22:05.212857    2012 log.go:172] (0xc0008a8370) Reply frame received for 1\nI0705 14:22:05.212901    2012 log.go:172] (0xc0008a8370) (0xc000912000) Create stream\nI0705 14:22:05.212912    2012 log.go:172] (0xc0008a8370) (0xc000912000) Stream added, broadcasting: 3\nI0705 14:22:05.213884    2012 log.go:172] (0xc0008a8370) Reply frame received for 3\nI0705 14:22:05.213912    2012 log.go:172] (0xc0008a8370) (0xc000612280) Create stream\nI0705 14:22:05.213921    2012 log.go:172] (0xc0008a8370) (0xc000612280) Stream added, broadcasting: 5\nI0705 14:22:05.214734    2012 log.go:172] (0xc0008a8370) Reply frame received for 5\nI0705 14:22:05.288583    2012 log.go:172] (0xc0008a8370) Data frame received for 5\nI0705 14:22:05.288624    2012 log.go:172] (0xc000612280) (5) Data frame handling\nI0705 14:22:05.288641    2012 log.go:172] (0xc000612280) (5) Data frame sent\nI0705 14:22:05.288653    2012 log.go:172] (0xc0008a8370) Data frame received for 5\nI0705 14:22:05.288665    2012 log.go:172] (0xc000612280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0705 14:22:05.288695    2012 log.go:172] (0xc0008a8370) Data frame received for 3\nI0705 14:22:05.288730    2012 log.go:172] (0xc000912000) (3) Data frame handling\nI0705 14:22:05.288754    2012 log.go:172] (0xc000912000) (3) Data frame sent\nI0705 14:22:05.288769    2012 log.go:172] (0xc0008a8370) Data frame received for 3\nI0705 14:22:05.288780    2012 log.go:172] (0xc000912000) (3) Data frame handling\nI0705 14:22:05.290873    2012 log.go:172] (0xc0008a8370) Data frame received for 1\nI0705 14:22:05.290899    2012 log.go:172] (0xc000292820) (1) Data frame handling\nI0705 14:22:05.290920    2012 log.go:172] (0xc000292820) (1) Data frame sent\nI0705 14:22:05.291493    2012 log.go:172] (0xc0008a8370) (0xc000292820) Stream removed, broadcasting: 1\nI0705 14:22:05.291557    2012 log.go:172] (0xc0008a8370) Go away received\nI0705 14:22:05.291932    2012 log.go:172] (0xc0008a8370) (0xc000292820) Stream removed, broadcasting: 1\nI0705 14:22:05.291954    2012 log.go:172] (0xc0008a8370) (0xc000912000) Stream removed, broadcasting: 3\nI0705 14:22:05.291966    2012 log.go:172] (0xc0008a8370) (0xc000612280) Stream removed, broadcasting: 5\n"
Jul  5 14:22:05.296: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  5 14:22:05.296: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  5 14:22:05.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5190 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  5 14:22:05.494: INFO: stderr: "I0705 14:22:05.410787    2034 log.go:172] (0xc0007e4580) (0xc0006fea00) Create stream\nI0705 14:22:05.410853    2034 log.go:172] (0xc0007e4580) (0xc0006fea00) Stream added, broadcasting: 1\nI0705 14:22:05.412857    2034 log.go:172] (0xc0007e4580) Reply frame received for 1\nI0705 14:22:05.412904    2034 log.go:172] (0xc0007e4580) (0xc0008a4000) Create stream\nI0705 14:22:05.412920    2034 log.go:172] (0xc0007e4580) (0xc0008a4000) Stream added, broadcasting: 3\nI0705 14:22:05.413967    2034 log.go:172] (0xc0007e4580) Reply frame received for 3\nI0705 14:22:05.414010    2034 log.go:172] (0xc0007e4580) (0xc0008d2000) Create stream\nI0705 14:22:05.414028    2034 log.go:172] (0xc0007e4580) (0xc0008d2000) Stream added, broadcasting: 5\nI0705 14:22:05.414907    2034 log.go:172] (0xc0007e4580) Reply frame received for 5\nI0705 14:22:05.486767    2034 log.go:172] (0xc0007e4580) Data frame received for 3\nI0705 14:22:05.486798    2034 log.go:172] (0xc0008a4000) (3) Data frame handling\nI0705 14:22:05.486810    2034 log.go:172] (0xc0008a4000) (3) Data frame sent\nI0705 14:22:05.486816    2034 log.go:172] (0xc0007e4580) Data frame received for 3\nI0705 14:22:05.486821    2034 log.go:172] (0xc0008a4000) (3) Data frame handling\nI0705 14:22:05.486873    2034 log.go:172] (0xc0007e4580) Data frame received for 5\nI0705 14:22:05.486952    2034 log.go:172] (0xc0008d2000) (5) Data frame handling\nI0705 14:22:05.486976    2034 log.go:172] (0xc0008d2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0705 14:22:05.486993    2034 log.go:172] (0xc0007e4580) Data frame received for 5\nI0705 14:22:05.487004    2034 log.go:172] (0xc0008d2000) (5) Data frame handling\nI0705 14:22:05.488559    2034 log.go:172] (0xc0007e4580) Data frame received for 1\nI0705 14:22:05.488595    2034 log.go:172] (0xc0006fea00) (1) Data frame handling\nI0705 14:22:05.488619    2034 log.go:172] (0xc0006fea00) (1) Data frame sent\nI0705 14:22:05.488657    2034 log.go:172] (0xc0007e4580) (0xc0006fea00) Stream removed, broadcasting: 1\nI0705 14:22:05.488703    2034 log.go:172] (0xc0007e4580) Go away received\nI0705 14:22:05.489760    2034 log.go:172] (0xc0007e4580) (0xc0006fea00) Stream removed, broadcasting: 1\nI0705 14:22:05.489791    2034 log.go:172] (0xc0007e4580) (0xc0008a4000) Stream removed, broadcasting: 3\nI0705 14:22:05.489825    2034 log.go:172] (0xc0007e4580) (0xc0008d2000) Stream removed, broadcasting: 5\n"
Jul  5 14:22:05.494: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  5 14:22:05.494: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  5 14:22:05.494: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jul  5 14:22:25.548: INFO: Deleting all statefulset in ns statefulset-5190
Jul  5 14:22:25.551: INFO: Scaling statefulset ss to 0
Jul  5 14:22:25.560: INFO: Waiting for statefulset status.replicas updated to 0
Jul  5 14:22:25.562: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:22:25.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5190" for this suite.
Jul  5 14:22:31.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:22:31.719: INFO: namespace statefulset-5190 deletion completed in 6.123073628s

• [SLOW TEST:88.552 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
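
The scaling spec relies on two things: the default OrderedReady pod management policy, and a readiness probe the test can break on demand. Moving index.html out of nginx's web root (the `mv` commands above) makes the probe fail, and moving it back restores readiness. A sketch of a StatefulSet with that shape follows; the selector labels and container name match the log, while the image and probe details are assumptions.

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss
  spec:
    serviceName: test
    podManagementPolicy: OrderedReady  # default: create/delete one pod at a time, in ordinal order
    replicas: 3
    selector:
      matchLabels:
        foo: bar
        baz: blah
    template:
      metadata:
        labels:
          foo: bar
          baz: blah
      spec:
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.14-alpine   # assumed; this image appears elsewhere in the run
          readinessProbe:
            httpGet:
              path: /index.html
              port: 80
            periodSeconds: 1           # assumed probe cadence

With OrderedReady, scale-up halts at the first non-ready ordinal and scale-down proceeds from the highest ordinal, which is exactly what the "doesn't scale past" checks above verify.
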
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:22:31.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 14:22:31.838: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jul  5 14:22:31.860: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:31.883: INFO: Number of nodes with available pods: 0
Jul  5 14:22:31.883: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 14:22:32.888: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:32.892: INFO: Number of nodes with available pods: 0
Jul  5 14:22:32.892: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 14:22:33.979: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:33.982: INFO: Number of nodes with available pods: 0
Jul  5 14:22:33.982: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 14:22:34.888: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:34.890: INFO: Number of nodes with available pods: 0
Jul  5 14:22:34.890: INFO: Node iruya-worker is running more than one daemon pod
Jul  5 14:22:35.890: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:35.894: INFO: Number of nodes with available pods: 1
Jul  5 14:22:35.894: INFO: Node iruya-worker2 is running more than one daemon pod
Jul  5 14:22:36.888: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:36.891: INFO: Number of nodes with available pods: 2
Jul  5 14:22:36.891: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jul  5 14:22:36.984: INFO: Wrong image for pod: daemon-set-kn76m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:36.984: INFO: Wrong image for pod: daemon-set-q222l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:37.034: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:38.039: INFO: Wrong image for pod: daemon-set-kn76m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:38.039: INFO: Wrong image for pod: daemon-set-q222l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:38.043: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:39.039: INFO: Wrong image for pod: daemon-set-kn76m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:39.039: INFO: Wrong image for pod: daemon-set-q222l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:39.044: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:40.039: INFO: Wrong image for pod: daemon-set-kn76m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:40.039: INFO: Wrong image for pod: daemon-set-q222l. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:40.039: INFO: Pod daemon-set-q222l is not available
Jul  5 14:22:40.043: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:41.039: INFO: Wrong image for pod: daemon-set-kn76m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:41.039: INFO: Pod daemon-set-th8lm is not available
Jul  5 14:22:41.043: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:42.048: INFO: Wrong image for pod: daemon-set-kn76m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:42.048: INFO: Pod daemon-set-th8lm is not available
Jul  5 14:22:42.051: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:43.039: INFO: Wrong image for pod: daemon-set-kn76m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:43.039: INFO: Pod daemon-set-th8lm is not available
Jul  5 14:22:43.042: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:44.039: INFO: Wrong image for pod: daemon-set-kn76m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:44.043: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:45.039: INFO: Wrong image for pod: daemon-set-kn76m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:45.043: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:46.039: INFO: Wrong image for pod: daemon-set-kn76m. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jul  5 14:22:46.039: INFO: Pod daemon-set-kn76m is not available
Jul  5 14:22:46.044: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:47.038: INFO: Pod daemon-set-f4ngk is not available
Jul  5 14:22:47.042: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jul  5 14:22:47.046: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:47.049: INFO: Number of nodes with available pods: 1
Jul  5 14:22:47.049: INFO: Node iruya-worker2 is running more than one daemon pod
Jul  5 14:22:48.055: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:48.059: INFO: Number of nodes with available pods: 1
Jul  5 14:22:48.059: INFO: Node iruya-worker2 is running more than one daemon pod
Jul  5 14:22:49.055: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:49.058: INFO: Number of nodes with available pods: 1
Jul  5 14:22:49.058: INFO: Node iruya-worker2 is running more than one daemon pod
Jul  5 14:22:50.055: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul  5 14:22:50.059: INFO: Number of nodes with available pods: 2
Jul  5 14:22:50.059: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5922, will wait for the garbage collector to delete the pods
Jul  5 14:22:50.135: INFO: Deleting DaemonSet.extensions daemon-set took: 6.980172ms
Jul  5 14:22:50.435: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.276063ms
Jul  5 14:22:55.938: INFO: Number of nodes with available pods: 0
Jul  5 14:22:55.938: INFO: Number of running nodes: 0, number of available pods: 0
Jul  5 14:22:55.940: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5922/daemonsets","resourceVersion":"248504"},"items":null}

Jul  5 14:22:55.943: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5922/pods","resourceVersion":"248504"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:22:55.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5922" for this suite.
Jul  5 14:23:01.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:23:02.041: INFO: namespace daemonsets-5922 deletion completed in 6.085064906s

• [SLOW TEST:30.322 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
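
The DaemonSet spec updates the pod template image and watches the controller replace pods node by node, which matches the RollingUpdate strategy. A sketch using the two images taken from the log; the labels and maxUnavailable value are assumptions.

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        app: daemon-set                # illustrative label
    updateStrategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1              # replace pods on one node at a time
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        containers:
        - name: app
          image: docker.io/library/nginx:1.14-alpine   # initial image from the log

Updating the template, for example with `kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0`, drives the "Wrong image for pod" checks above to converge as each node's pod is recreated.
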
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:23:02.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-9381
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-9381
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-9381
Jul  5 14:23:02.171: INFO: Found 0 stateful pods, waiting for 1
Jul  5 14:23:12.176: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jul  5 14:23:12.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9381 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  5 14:23:15.135: INFO: stderr: "I0705 14:23:15.007446    2057 log.go:172] (0xc0001166e0) (0xc0006848c0) Create stream\nI0705 14:23:15.007483    2057 log.go:172] (0xc0001166e0) (0xc0006848c0) Stream added, broadcasting: 1\nI0705 14:23:15.009942    2057 log.go:172] (0xc0001166e0) Reply frame received for 1\nI0705 14:23:15.009980    2057 log.go:172] (0xc0001166e0) (0xc0008ac000) Create stream\nI0705 14:23:15.009990    2057 log.go:172] (0xc0001166e0) (0xc0008ac000) Stream added, broadcasting: 3\nI0705 14:23:15.010911    2057 log.go:172] (0xc0001166e0) Reply frame received for 3\nI0705 14:23:15.010963    2057 log.go:172] (0xc0001166e0) (0xc00093c000) Create stream\nI0705 14:23:15.010986    2057 log.go:172] (0xc0001166e0) (0xc00093c000) Stream added, broadcasting: 5\nI0705 14:23:15.011914    2057 log.go:172] (0xc0001166e0) Reply frame received for 5\nI0705 14:23:15.087675    2057 log.go:172] (0xc0001166e0) Data frame received for 5\nI0705 14:23:15.087697    2057 log.go:172] (0xc00093c000) (5) Data frame handling\nI0705 14:23:15.087706    2057 log.go:172] (0xc00093c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0705 14:23:15.126791    2057 log.go:172] (0xc0001166e0) Data frame received for 3\nI0705 14:23:15.126834    2057 log.go:172] (0xc0008ac000) (3) Data frame handling\nI0705 14:23:15.126869    2057 log.go:172] (0xc0008ac000) (3) Data frame sent\nI0705 14:23:15.126887    2057 log.go:172] (0xc0001166e0) Data frame received for 3\nI0705 14:23:15.126901    2057 log.go:172] (0xc0008ac000) (3) Data frame handling\nI0705 14:23:15.127085    2057 log.go:172] (0xc0001166e0) Data frame received for 5\nI0705 14:23:15.127118    2057 log.go:172] (0xc00093c000) (5) Data frame handling\nI0705 14:23:15.128844    2057 log.go:172] (0xc0001166e0) Data frame received for 1\nI0705 14:23:15.128877    2057 log.go:172] (0xc0006848c0) (1) Data frame handling\nI0705 14:23:15.128905    2057 log.go:172] (0xc0006848c0) (1) Data frame sent\nI0705 14:23:15.129026    2057 log.go:172] (0xc0001166e0) (0xc0006848c0) Stream removed, broadcasting: 1\nI0705 14:23:15.129059    2057 log.go:172] (0xc0001166e0) Go away received\nI0705 14:23:15.129800    2057 log.go:172] (0xc0001166e0) (0xc0006848c0) Stream removed, broadcasting: 1\nI0705 14:23:15.129827    2057 log.go:172] (0xc0001166e0) (0xc0008ac000) Stream removed, broadcasting: 3\nI0705 14:23:15.129840    2057 log.go:172] (0xc0001166e0) (0xc00093c000) Stream removed, broadcasting: 5\n"
Jul  5 14:23:15.135: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  5 14:23:15.135: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  5 14:23:15.139: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul  5 14:23:25.145: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  5 14:23:25.145: INFO: Waiting for statefulset status.replicas updated to 0
Jul  5 14:23:25.213: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul  5 14:23:25.213: INFO: ss-0  iruya-worker  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  }]
Jul  5 14:23:25.213: INFO: 
Jul  5 14:23:25.213: INFO: StatefulSet ss has not reached scale 3, at 1
Jul  5 14:23:26.217: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.94112256s
Jul  5 14:23:27.254: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.93695031s
Jul  5 14:23:28.259: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.899633671s
Jul  5 14:23:29.266: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.894287837s
Jul  5 14:23:30.270: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.888261171s
Jul  5 14:23:31.275: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.883980837s
Jul  5 14:23:32.279: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.878971422s
Jul  5 14:23:33.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.874744759s
Jul  5 14:23:34.289: INFO: Verifying statefulset ss doesn't scale past 3 for another 869.34616ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9381
Jul  5 14:23:35.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9381 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  5 14:23:35.517: INFO: stderr: "I0705 14:23:35.424323    2087 log.go:172] (0xc000a6c420) (0xc0006488c0) Create stream\nI0705 14:23:35.424481    2087 log.go:172] (0xc000a6c420) (0xc0006488c0) Stream added, broadcasting: 1\nI0705 14:23:35.427470    2087 log.go:172] (0xc000a6c420) Reply frame received for 1\nI0705 14:23:35.427505    2087 log.go:172] (0xc000a6c420) (0xc000648000) Create stream\nI0705 14:23:35.427514    2087 log.go:172] (0xc000a6c420) (0xc000648000) Stream added, broadcasting: 3\nI0705 14:23:35.428395    2087 log.go:172] (0xc000a6c420) Reply frame received for 3\nI0705 14:23:35.428442    2087 log.go:172] (0xc000a6c420) (0xc000694280) Create stream\nI0705 14:23:35.428457    2087 log.go:172] (0xc000a6c420) (0xc000694280) Stream added, broadcasting: 5\nI0705 14:23:35.429443    2087 log.go:172] (0xc000a6c420) Reply frame received for 5\nI0705 14:23:35.511863    2087 log.go:172] (0xc000a6c420) Data frame received for 3\nI0705 14:23:35.511904    2087 log.go:172] (0xc000648000) (3) Data frame handling\nI0705 14:23:35.511917    2087 log.go:172] (0xc000648000) (3) Data frame sent\nI0705 14:23:35.511924    2087 log.go:172] (0xc000a6c420) Data frame received for 3\nI0705 14:23:35.511931    2087 log.go:172] (0xc000648000) (3) Data frame handling\nI0705 14:23:35.511961    2087 log.go:172] (0xc000a6c420) Data frame received for 5\nI0705 14:23:35.511999    2087 log.go:172] (0xc000694280) (5) Data frame handling\nI0705 14:23:35.512014    2087 log.go:172] (0xc000694280) (5) Data frame sent\nI0705 14:23:35.512026    2087 log.go:172] (0xc000a6c420) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0705 14:23:35.512031    2087 log.go:172] (0xc000694280) (5) Data frame handling\nI0705 14:23:35.513260    2087 log.go:172] (0xc000a6c420) Data frame received for 1\nI0705 14:23:35.513290    2087 log.go:172] (0xc0006488c0) (1) Data frame handling\nI0705 14:23:35.513315    2087 log.go:172] (0xc0006488c0) (1) Data frame sent\nI0705 14:23:35.513338    2087 log.go:172] (0xc000a6c420) (0xc0006488c0) Stream removed, broadcasting: 1\nI0705 14:23:35.513361    2087 log.go:172] (0xc000a6c420) Go away received\nI0705 14:23:35.513728    2087 log.go:172] (0xc000a6c420) (0xc0006488c0) Stream removed, broadcasting: 1\nI0705 14:23:35.513743    2087 log.go:172] (0xc000a6c420) (0xc000648000) Stream removed, broadcasting: 3\nI0705 14:23:35.513750    2087 log.go:172] (0xc000a6c420) (0xc000694280) Stream removed, broadcasting: 5\n"
Jul  5 14:23:35.517: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  5 14:23:35.517: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  5 14:23:35.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9381 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  5 14:23:35.724: INFO: stderr: "I0705 14:23:35.644137    2109 log.go:172] (0xc000116f20) (0xc000992640) Create stream\nI0705 14:23:35.644227    2109 log.go:172] (0xc000116f20) (0xc000992640) Stream added, broadcasting: 1\nI0705 14:23:35.646738    2109 log.go:172] (0xc000116f20) Reply frame received for 1\nI0705 14:23:35.646782    2109 log.go:172] (0xc000116f20) (0xc00092e000) Create stream\nI0705 14:23:35.646799    2109 log.go:172] (0xc000116f20) (0xc00092e000) Stream added, broadcasting: 3\nI0705 14:23:35.648071    2109 log.go:172] (0xc000116f20) Reply frame received for 3\nI0705 14:23:35.648105    2109 log.go:172] (0xc000116f20) (0xc0006941e0) Create stream\nI0705 14:23:35.648115    2109 log.go:172] (0xc000116f20) (0xc0006941e0) Stream added, broadcasting: 5\nI0705 14:23:35.649091    2109 log.go:172] (0xc000116f20) Reply frame received for 5\nI0705 14:23:35.717103    2109 log.go:172] (0xc000116f20) Data frame received for 3\nI0705 14:23:35.717331    2109 log.go:172] (0xc00092e000) (3) Data frame handling\nI0705 14:23:35.717346    2109 log.go:172] (0xc00092e000) (3) Data frame sent\nI0705 14:23:35.717476    2109 log.go:172] (0xc000116f20) Data frame received for 3\nI0705 14:23:35.717496    2109 log.go:172] (0xc00092e000) (3) Data frame handling\nI0705 14:23:35.717533    2109 log.go:172] (0xc000116f20) Data frame received for 5\nI0705 14:23:35.717552    2109 log.go:172] (0xc0006941e0) (5) Data frame handling\nI0705 14:23:35.717588    2109 log.go:172] (0xc0006941e0) (5) Data frame sent\nI0705 14:23:35.717609    2109 log.go:172] (0xc000116f20) Data frame received for 5\nI0705 14:23:35.717627    2109 log.go:172] (0xc0006941e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0705 14:23:35.719087    2109 log.go:172] (0xc000116f20) Data frame received for 1\nI0705 14:23:35.719123    2109 log.go:172] (0xc000992640) (1) Data frame handling\nI0705 14:23:35.719173    2109 log.go:172] (0xc000992640) (1) Data frame sent\nI0705 14:23:35.719218    2109 log.go:172] (0xc000116f20) (0xc000992640) Stream removed, broadcasting: 1\nI0705 14:23:35.719560    2109 log.go:172] (0xc000116f20) Go away received\nI0705 14:23:35.719598    2109 log.go:172] (0xc000116f20) (0xc000992640) Stream removed, broadcasting: 1\nI0705 14:23:35.719633    2109 log.go:172] (0xc000116f20) (0xc00092e000) Stream removed, broadcasting: 3\nI0705 14:23:35.719658    2109 log.go:172] (0xc000116f20) (0xc0006941e0) Stream removed, broadcasting: 5\n"
Jul  5 14:23:35.724: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  5 14:23:35.724: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  5 14:23:35.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9381 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jul  5 14:23:35.953: INFO: stderr: "I0705 14:23:35.854860    2129 log.go:172] (0xc000a9c420) (0xc000532820) Create stream\nI0705 14:23:35.854920    2129 log.go:172] (0xc000a9c420) (0xc000532820) Stream added, broadcasting: 1\nI0705 14:23:35.860856    2129 log.go:172] (0xc000a9c420) Reply frame received for 1\nI0705 14:23:35.860918    2129 log.go:172] (0xc000a9c420) (0xc00071e320) Create stream\nI0705 14:23:35.860929    2129 log.go:172] (0xc000a9c420) (0xc00071e320) Stream added, broadcasting: 3\nI0705 14:23:35.862824    2129 log.go:172] (0xc000a9c420) Reply frame received for 3\nI0705 14:23:35.862854    2129 log.go:172] (0xc000a9c420) (0xc00071e3c0) Create stream\nI0705 14:23:35.862865    2129 log.go:172] (0xc000a9c420) (0xc00071e3c0) Stream added, broadcasting: 5\nI0705 14:23:35.863950    2129 log.go:172] (0xc000a9c420) Reply frame received for 5\nI0705 14:23:35.945951    2129 log.go:172] (0xc000a9c420) Data frame received for 5\nI0705 14:23:35.945984    2129 log.go:172] (0xc00071e3c0) (5) Data frame handling\nI0705 14:23:35.946005    2129 log.go:172] (0xc00071e3c0) (5) Data frame sent\nI0705 14:23:35.946014    2129 log.go:172] (0xc000a9c420) Data frame received for 5\nI0705 14:23:35.946020    2129 log.go:172] (0xc00071e3c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0705 14:23:35.946051    2129 log.go:172] (0xc000a9c420) Data frame received for 3\nI0705 14:23:35.946086    2129 log.go:172] (0xc00071e320) (3) Data frame handling\nI0705 14:23:35.946118    2129 log.go:172] (0xc00071e320) (3) Data frame sent\nI0705 14:23:35.946141    2129 log.go:172] (0xc000a9c420) Data frame received for 3\nI0705 14:23:35.946158    2129 log.go:172] (0xc00071e320) (3) Data frame handling\nI0705 14:23:35.948141    2129 log.go:172] (0xc000a9c420) Data frame received for 1\nI0705 14:23:35.948163    2129 log.go:172] (0xc000532820) (1) Data frame handling\nI0705 14:23:35.948176    2129 log.go:172] (0xc000532820) (1) Data frame sent\nI0705 14:23:35.948191    2129 log.go:172] (0xc000a9c420) (0xc000532820) Stream removed, broadcasting: 1\nI0705 14:23:35.948212    2129 log.go:172] (0xc000a9c420) Go away received\nI0705 14:23:35.948669    2129 log.go:172] (0xc000a9c420) (0xc000532820) Stream removed, broadcasting: 1\nI0705 14:23:35.948695    2129 log.go:172] (0xc000a9c420) (0xc00071e320) Stream removed, broadcasting: 3\nI0705 14:23:35.948708    2129 log.go:172] (0xc000a9c420) (0xc00071e3c0) Stream removed, broadcasting: 5\n"
Jul  5 14:23:35.953: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jul  5 14:23:35.953: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jul  5 14:23:35.968: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jul  5 14:23:45.973: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul  5 14:23:45.973: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul  5 14:23:45.973: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jul  5 14:23:45.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9381 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  5 14:23:46.230: INFO: stderr: "I0705 14:23:46.147919    2149 log.go:172] (0xc0009de370) (0xc0008b66e0) Create stream\nI0705 14:23:46.147977    2149 log.go:172] (0xc0009de370) (0xc0008b66e0) Stream added, broadcasting: 1\nI0705 14:23:46.150748    2149 log.go:172] (0xc0009de370) Reply frame received for 1\nI0705 14:23:46.150799    2149 log.go:172] (0xc0009de370) (0xc000340140) Create stream\nI0705 14:23:46.150816    2149 log.go:172] (0xc0009de370) (0xc000340140) Stream added, broadcasting: 3\nI0705 14:23:46.151765    2149 log.go:172] (0xc0009de370) Reply frame received for 3\nI0705 14:23:46.151798    2149 log.go:172] (0xc0009de370) (0xc0008b6780) Create stream\nI0705 14:23:46.151811    2149 log.go:172] (0xc0009de370) (0xc0008b6780) Stream added, broadcasting: 5\nI0705 14:23:46.152626    2149 log.go:172] (0xc0009de370) Reply frame received for 5\nI0705 14:23:46.224576    2149 log.go:172] (0xc0009de370) Data frame received for 3\nI0705 14:23:46.224634    2149 log.go:172] (0xc000340140) (3) Data frame handling\nI0705 14:23:46.224662    2149 log.go:172] (0xc000340140) (3) Data frame sent\nI0705 14:23:46.224706    2149 log.go:172] (0xc0009de370) Data frame received for 5\nI0705 14:23:46.224728    2149 log.go:172] (0xc0008b6780) (5) Data frame handling\nI0705 14:23:46.224756    2149 log.go:172] (0xc0008b6780) (5) Data frame sent\nI0705 14:23:46.224790    2149 log.go:172] (0xc0009de370) Data frame received for 5\nI0705 14:23:46.224816    2149 log.go:172] (0xc0008b6780) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0705 14:23:46.224865    2149 log.go:172] (0xc0009de370) Data frame received for 3\nI0705 14:23:46.224888    2149 log.go:172] (0xc000340140) (3) Data frame handling\nI0705 14:23:46.226760    2149 log.go:172] (0xc0009de370) Data frame received for 1\nI0705 14:23:46.226779    2149 log.go:172] (0xc0008b66e0) (1) Data frame handling\nI0705 14:23:46.226786    2149 log.go:172] (0xc0008b66e0) (1) Data frame sent\nI0705 14:23:46.226795    2149 log.go:172] (0xc0009de370) (0xc0008b66e0) Stream removed, broadcasting: 1\nI0705 14:23:46.226878    2149 log.go:172] (0xc0009de370) Go away received\nI0705 14:23:46.227037    2149 log.go:172] (0xc0009de370) (0xc0008b66e0) Stream removed, broadcasting: 1\nI0705 14:23:46.227049    2149 log.go:172] (0xc0009de370) (0xc000340140) Stream removed, broadcasting: 3\nI0705 14:23:46.227055    2149 log.go:172] (0xc0009de370) (0xc0008b6780) Stream removed, broadcasting: 5\n"
Jul  5 14:23:46.231: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  5 14:23:46.231: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  5 14:23:46.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9381 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  5 14:23:46.494: INFO: stderr: "I0705 14:23:46.360793    2169 log.go:172] (0xc00089a4d0) (0xc0007da6e0) Create stream\nI0705 14:23:46.360849    2169 log.go:172] (0xc00089a4d0) (0xc0007da6e0) Stream added, broadcasting: 1\nI0705 14:23:46.365294    2169 log.go:172] (0xc00089a4d0) Reply frame received for 1\nI0705 14:23:46.365336    2169 log.go:172] (0xc00089a4d0) (0xc00003ba40) Create stream\nI0705 14:23:46.365352    2169 log.go:172] (0xc00089a4d0) (0xc00003ba40) Stream added, broadcasting: 3\nI0705 14:23:46.366211    2169 log.go:172] (0xc00089a4d0) Reply frame received for 3\nI0705 14:23:46.366242    2169 log.go:172] (0xc00089a4d0) (0xc0007da000) Create stream\nI0705 14:23:46.366258    2169 log.go:172] (0xc00089a4d0) (0xc0007da000) Stream added, broadcasting: 5\nI0705 14:23:46.367082    2169 log.go:172] (0xc00089a4d0) Reply frame received for 5\nI0705 14:23:46.417288    2169 log.go:172] (0xc00089a4d0) Data frame received for 5\nI0705 14:23:46.417325    2169 log.go:172] (0xc0007da000) (5) Data frame handling\nI0705 14:23:46.417348    2169 log.go:172] (0xc0007da000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0705 14:23:46.487417    2169 log.go:172] (0xc00089a4d0) Data frame received for 3\nI0705 14:23:46.487450    2169 log.go:172] (0xc00003ba40) (3) Data frame handling\nI0705 14:23:46.487468    2169 log.go:172] (0xc00003ba40) (3) Data frame sent\nI0705 14:23:46.487613    2169 log.go:172] (0xc00089a4d0) Data frame received for 5\nI0705 14:23:46.487625    2169 log.go:172] (0xc0007da000) (5) Data frame handling\nI0705 14:23:46.487640    2169 log.go:172] (0xc00089a4d0) Data frame received for 3\nI0705 14:23:46.487644    2169 log.go:172] (0xc00003ba40) (3) Data frame handling\nI0705 14:23:46.489888    2169 log.go:172] (0xc00089a4d0) Data frame received for 1\nI0705 14:23:46.489913    2169 log.go:172] (0xc0007da6e0) (1) Data frame handling\nI0705 14:23:46.489932    2169 log.go:172] (0xc0007da6e0) (1) Data frame sent\nI0705 14:23:46.489952    2169 log.go:172] (0xc00089a4d0) (0xc0007da6e0) Stream removed, broadcasting: 1\nI0705 14:23:46.490096    2169 log.go:172] (0xc00089a4d0) Go away received\nI0705 14:23:46.490281    2169 log.go:172] (0xc00089a4d0) (0xc0007da6e0) Stream removed, broadcasting: 1\nI0705 14:23:46.490304    2169 log.go:172] (0xc00089a4d0) (0xc00003ba40) Stream removed, broadcasting: 3\nI0705 14:23:46.490314    2169 log.go:172] (0xc00089a4d0) (0xc0007da000) Stream removed, broadcasting: 5\n"
Jul  5 14:23:46.495: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  5 14:23:46.495: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  5 14:23:46.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9381 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jul  5 14:23:46.733: INFO: stderr: "I0705 14:23:46.626783    2191 log.go:172] (0xc00013edc0) (0xc0006e6820) Create stream\nI0705 14:23:46.626846    2191 log.go:172] (0xc00013edc0) (0xc0006e6820) Stream added, broadcasting: 1\nI0705 14:23:46.635163    2191 log.go:172] (0xc00013edc0) Reply frame received for 1\nI0705 14:23:46.635299    2191 log.go:172] (0xc00013edc0) (0xc000654280) Create stream\nI0705 14:23:46.635369    2191 log.go:172] (0xc00013edc0) (0xc000654280) Stream added, broadcasting: 3\nI0705 14:23:46.636934    2191 log.go:172] (0xc00013edc0) Reply frame received for 3\nI0705 14:23:46.636976    2191 log.go:172] (0xc00013edc0) (0xc0006e6000) Create stream\nI0705 14:23:46.636987    2191 log.go:172] (0xc00013edc0) (0xc0006e6000) Stream added, broadcasting: 5\nI0705 14:23:46.638876    2191 log.go:172] (0xc00013edc0) Reply frame received for 5\nI0705 14:23:46.694335    2191 log.go:172] (0xc00013edc0) Data frame received for 5\nI0705 14:23:46.694359    2191 log.go:172] (0xc0006e6000) (5) Data frame handling\nI0705 14:23:46.694375    2191 log.go:172] (0xc0006e6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0705 14:23:46.725691    2191 log.go:172] (0xc00013edc0) Data frame received for 3\nI0705 14:23:46.725731    2191 log.go:172] (0xc000654280) (3) Data frame handling\nI0705 14:23:46.725766    2191 log.go:172] (0xc000654280) (3) Data frame sent\nI0705 14:23:46.726091    2191 log.go:172] (0xc00013edc0) Data frame received for 5\nI0705 14:23:46.726127    2191 log.go:172] (0xc00013edc0) Data frame received for 3\nI0705 14:23:46.726170    2191 log.go:172] (0xc000654280) (3) Data frame handling\nI0705 14:23:46.726211    2191 log.go:172] (0xc0006e6000) (5) Data frame handling\nI0705 14:23:46.727723    2191 log.go:172] (0xc00013edc0) Data frame received for 1\nI0705 14:23:46.727742    2191 log.go:172] (0xc0006e6820) (1) Data frame handling\nI0705 14:23:46.727751    2191 log.go:172] (0xc0006e6820) (1) Data frame sent\nI0705 14:23:46.727763    2191 log.go:172] (0xc00013edc0) (0xc0006e6820) Stream removed, broadcasting: 1\nI0705 14:23:46.727805    2191 log.go:172] (0xc00013edc0) Go away received\nI0705 14:23:46.728086    2191 log.go:172] (0xc00013edc0) (0xc0006e6820) Stream removed, broadcasting: 1\nI0705 14:23:46.728104    2191 log.go:172] (0xc00013edc0) (0xc000654280) Stream removed, broadcasting: 3\nI0705 14:23:46.728111    2191 log.go:172] (0xc00013edc0) (0xc0006e6000) Stream removed, broadcasting: 5\n"
Jul  5 14:23:46.733: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jul  5 14:23:46.733: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jul  5 14:23:46.733: INFO: Waiting for statefulset status.replicas updated to 0
Jul  5 14:23:46.736: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jul  5 14:23:56.743: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul  5 14:23:56.743: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul  5 14:23:56.743: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul  5 14:23:56.755: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  5 14:23:56.755: INFO: ss-0  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  }]
Jul  5 14:23:56.755: INFO: ss-1  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:23:56.755: INFO: ss-2  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:23:56.755: INFO: 
Jul  5 14:23:56.755: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 14:23:57.761: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  5 14:23:57.761: INFO: ss-0  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  }]
Jul  5 14:23:57.761: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:23:57.761: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:23:57.761: INFO: 
Jul  5 14:23:57.761: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 14:23:58.784: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  5 14:23:58.784: INFO: ss-0  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  }]
Jul  5 14:23:58.784: INFO: ss-1  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:23:58.784: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:23:58.784: INFO: 
Jul  5 14:23:58.784: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 14:23:59.788: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  5 14:23:59.788: INFO: ss-0  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  }]
Jul  5 14:23:59.788: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:23:59.788: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:23:59.788: INFO: 
Jul  5 14:23:59.788: INFO: StatefulSet ss has not reached scale 0, at 3
Jul  5 14:24:00.792: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  5 14:24:00.792: INFO: ss-0  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  }]
Jul  5 14:24:00.792: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:24:00.792: INFO: 
Jul  5 14:24:00.792: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  5 14:24:01.797: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  5 14:24:01.798: INFO: ss-0  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  }]
Jul  5 14:24:01.798: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:24:01.798: INFO: 
Jul  5 14:24:01.798: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  5 14:24:02.802: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  5 14:24:02.802: INFO: ss-0  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  }]
Jul  5 14:24:02.802: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:24:02.803: INFO: 
Jul  5 14:24:02.803: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  5 14:24:03.808: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  5 14:24:03.808: INFO: ss-0  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  }]
Jul  5 14:24:03.808: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:24:03.808: INFO: 
Jul  5 14:24:03.808: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  5 14:24:04.813: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  5 14:24:04.813: INFO: ss-0  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  }]
Jul  5 14:24:04.813: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:24:04.813: INFO: 
Jul  5 14:24:04.813: INFO: StatefulSet ss has not reached scale 0, at 2
Jul  5 14:24:05.818: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Jul  5 14:24:05.818: INFO: ss-0  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:02 +0000 UTC  }]
Jul  5 14:24:05.818: INFO: ss-1  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-05 14:23:25 +0000 UTC  }]
Jul  5 14:24:05.818: INFO: 
Jul  5 14:24:05.818: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods is running in namespace statefulset-9381
Jul  5 14:24:06.822: INFO: Scaling statefulset ss to 0
Jul  5 14:24:06.831: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jul  5 14:24:06.834: INFO: Deleting all statefulset in ns statefulset-9381
Jul  5 14:24:06.836: INFO: Scaling statefulset ss to 0
Jul  5 14:24:06.846: INFO: Waiting for statefulset status.replicas updated to 0
Jul  5 14:24:06.848: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:24:06.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9381" for this suite.
Jul  5 14:24:12.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:24:12.987: INFO: namespace statefulset-9381 deletion completed in 6.12069041s

• [SLOW TEST:70.945 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
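The behavior this spec just verified, scale-up to 3 proceeding while ss-0 is unready and scale-down to 0 completing while all three replicas are unready, is what podManagementPolicy: Parallel provides; under the default OrderedReady policy the controller waits for each ordinal to become Ready before acting on the next. A hedged sketch of the relevant spec, with names echoing the log but all other values assumed:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: ss                        # matches the pod names ss-0..ss-2 above
  spec:
    serviceName: test               # headless-service name is an assumption
    replicas: 3
    podManagementPolicy: Parallel   # burst scaling: no Ready gate between ordinals
    selector:
      matchLabels:
        app: ss                     # assumed label
    template:
      metadata:
        labels:
          app: ss
      spec:
        containers:
        - name: nginx
          image: nginx              # assumed image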
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:24:12.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jul  5 14:24:21.191: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:21.258: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:23.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:23.263: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:25.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:25.263: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:27.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:27.262: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:29.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:29.267: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:31.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:31.263: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:33.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:33.264: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:35.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:35.263: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:37.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:37.263: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:39.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:39.263: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:41.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:41.263: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:43.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:43.262: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:45.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:45.263: INFO: Pod pod-with-prestop-exec-hook still exists
Jul  5 14:24:47.259: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jul  5 14:24:47.271: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:24:47.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-148" for this suite.
Jul  5 14:25:09.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:25:09.399: INFO: namespace container-lifecycle-hook-148 deletion completed in 22.117199104s

• [SLOW TEST:56.411 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
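The half-minute of "still exists" polling above is the preStop hook at work: on deletion the kubelet runs the hook inside the container before sending SIGTERM, and the pod object only disappears once the hook and the termination grace period have run their course. A minimal sketch of a preStop exec hook; the command and handler address are illustrative assumptions, not the suite's actual values:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-prestop-exec-hook
  spec:
    terminationGracePeriodSeconds: 30        # assumed; bounds how long deletion can take
    containers:
    - name: main
      image: nginx                           # assumed image
      lifecycle:
        preStop:
          exec:
            # Assumed command: the conformance test calls out to a separate
            # handler pod so it can verify afterwards that the hook fired.
            command: ["/bin/sh", "-c", "wget -qO- http://hook-handler/echo?msg=prestop || true"]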
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:25:09.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul  5 14:25:09.506: INFO: Waiting up to 5m0s for pod "pod-c48ad270-cc45-4c8a-9c5c-ee89f7260a34" in namespace "emptydir-1930" to be "success or failure"
Jul  5 14:25:09.527: INFO: Pod "pod-c48ad270-cc45-4c8a-9c5c-ee89f7260a34": Phase="Pending", Reason="", readiness=false. Elapsed: 21.337219ms
Jul  5 14:25:11.531: INFO: Pod "pod-c48ad270-cc45-4c8a-9c5c-ee89f7260a34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025566315s
Jul  5 14:25:13.536: INFO: Pod "pod-c48ad270-cc45-4c8a-9c5c-ee89f7260a34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030166484s
STEP: Saw pod success
Jul  5 14:25:13.536: INFO: Pod "pod-c48ad270-cc45-4c8a-9c5c-ee89f7260a34" satisfied condition "success or failure"
Jul  5 14:25:13.539: INFO: Trying to get logs from node iruya-worker pod pod-c48ad270-cc45-4c8a-9c5c-ee89f7260a34 container test-container: 
STEP: delete the pod
Jul  5 14:25:13.586: INFO: Waiting for pod pod-c48ad270-cc45-4c8a-9c5c-ee89f7260a34 to disappear
Jul  5 14:25:13.611: INFO: Pod pod-c48ad270-cc45-4c8a-9c5c-ee89f7260a34 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:25:13.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1930" for this suite.
Jul  5 14:25:19.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:25:19.702: INFO: namespace emptydir-1930 deletion completed in 6.087752146s

• [SLOW TEST:10.302 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
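Unpacking the test name: "(non-root,0644,tmpfs)" means the pod runs as a non-root UID, writes a mode-0644 file, and the emptyDir is memory-backed. A hedged sketch of such a pod; UID, image, and paths are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo          # hypothetical name
  spec:
    securityContext:
      runAsUser: 1001                  # the "non-root" part; UID assumed
    containers:
    - name: test-container
      image: busybox                   # assumed image
      command: ["/bin/sh", "-c", "echo hello > /ed/f && chmod 0644 /ed/f && ls -l /ed/f"]
      volumeMounts:
      - name: ed
        mountPath: /ed
    volumes:
    - name: ed
      emptyDir:
        medium: Memory                 # the "tmpfs" part: RAM-backed volume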
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:25:19.703: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:25:19.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4300" for this suite.
Jul  5 14:25:25.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:25:25.939: INFO: namespace services-4300 deletion completed in 6.101522977s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.236 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
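This spec emits no STEP lines because it only reads state: it fetches the built-in kubernetes Service in the default namespace and asserts it exposes the API server over HTTPS on port 443. Roughly the object it inspects; the targetPort varies by cluster, and 6443 is an assumed (kubeadm/kind-style) value:

  apiVersion: v1
  kind: Service
  metadata:
    name: kubernetes
    namespace: default
  spec:
    type: ClusterIP
    ports:
    - name: https
      port: 443          # the "secure" port the test asserts
      protocol: TCP
      targetPort: 6443   # assumed; depends on the API server's bind port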
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:25:25.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jul  5 14:25:30.591: INFO: Successfully updated pod "labelsupdate031a2390-2d1d-4c3c-80a9-71c36a96f49d"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:25:32.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1944" for this suite.
Jul  5 14:25:54.638: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:25:54.711: INFO: namespace downward-api-1944 deletion completed in 22.103227417s

• [SLOW TEST:28.771 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
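The single "Successfully updated pod" line hides the mechanism under test: the pod mounts its own labels through a downwardAPI volume, the test patches the labels on the live pod, and the kubelet rewrites the projected file in place, so the container sees the new values without restarting. A minimal sketch of the wiring; names and image are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: labelsupdate-demo            # hypothetical name
    labels:
      key: value1                      # the value the test later patches
  spec:
    containers:
    - name: client
      image: busybox                   # assumed image
      command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 1; done"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels # kubelet refreshes this file when labels change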
SS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:25:54.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jul  5 14:25:55.341: INFO: created pod pod-service-account-defaultsa
Jul  5 14:25:55.341: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jul  5 14:25:55.350: INFO: created pod pod-service-account-mountsa
Jul  5 14:25:55.350: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jul  5 14:25:55.375: INFO: created pod pod-service-account-nomountsa
Jul  5 14:25:55.375: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jul  5 14:25:55.391: INFO: created pod pod-service-account-defaultsa-mountspec
Jul  5 14:25:55.391: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jul  5 14:25:55.419: INFO: created pod pod-service-account-mountsa-mountspec
Jul  5 14:25:55.419: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jul  5 14:25:55.479: INFO: created pod pod-service-account-nomountsa-mountspec
Jul  5 14:25:55.479: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jul  5 14:25:55.494: INFO: created pod pod-service-account-defaultsa-nomountspec
Jul  5 14:25:55.494: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jul  5 14:25:55.534: INFO: created pod pod-service-account-mountsa-nomountspec
Jul  5 14:25:55.534: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jul  5 14:25:55.566: INFO: created pod pod-service-account-nomountsa-nomountspec
Jul  5 14:25:55.567: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:25:55.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9608" for this suite.
Jul  5 14:26:23.718: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:26:23.791: INFO: namespace svcaccounts-9608 deletion completed in 28.171972282s

• [SLOW TEST:29.080 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
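The nine pods above walk the full matrix this spec cares about: automountServiceAccountToken left unset, set true, or set false on the ServiceAccount, crossed with the same three choices on the pod spec. The logged "volume mount:" values confirm the precedence rule: the pod-level field wins whenever it is set, and the ServiceAccount's value is only a default. A hedged sketch of the two knobs, showing the override case from the log; resource names are illustrative:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nomountsa                      # illustrative name
  automountServiceAccountToken: false    # SA-level default: no token volume
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-service-account-nomountsa-mountspec
  spec:
    serviceAccountName: nomountsa
    automountServiceAccountToken: true   # pod-level setting overrides the SA: mount == true
    containers:
    - name: main
      image: busybox                     # assumed image
      command: ["sleep", "3600"]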
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:26:23.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-5370
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5370 to expose endpoints map[]
Jul  5 14:26:23.931: INFO: successfully validated that service endpoint-test2 in namespace services-5370 exposes endpoints map[] (22.537507ms elapsed)
STEP: Creating pod pod1 in namespace services-5370
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5370 to expose endpoints map[pod1:[80]]
Jul  5 14:26:27.070: INFO: successfully validated that service endpoint-test2 in namespace services-5370 exposes endpoints map[pod1:[80]] (3.124953491s elapsed)
STEP: Creating pod pod2 in namespace services-5370
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5370 to expose endpoints map[pod1:[80] pod2:[80]]
Jul  5 14:26:31.481: INFO: successfully validated that service endpoint-test2 in namespace services-5370 exposes endpoints map[pod1:[80] pod2:[80]] (4.408063322s elapsed)
STEP: Deleting pod pod1 in namespace services-5370
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5370 to expose endpoints map[pod2:[80]]
Jul  5 14:26:32.526: INFO: successfully validated that service endpoint-test2 in namespace services-5370 exposes endpoints map[pod2:[80]] (1.041208784s elapsed)
STEP: Deleting pod pod2 in namespace services-5370
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5370 to expose endpoints map[]
Jul  5 14:26:33.544: INFO: successfully validated that service endpoint-test2 in namespace services-5370 exposes endpoints map[] (1.012990924s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:26:33.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5370" for this suite.
Jul  5 14:26:55.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:26:55.738: INFO: namespace services-5370 deletion completed in 22.076014725s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:31.946 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
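The endpoint maps in these lines simply mirror pod lifecycle: the service starts empty, gains pod1's IP on port 80 once pod1 is Running and Ready, gains pod2's, then loses each as the pods are deleted. A minimal sketch of a selector-driven service plus one matching pod; labels and image are illustrative:

  apiVersion: v1
  kind: Service
  metadata:
    name: endpoint-test2
  spec:
    selector:
      app: endpoint-test       # assumed label; Ready pods matching it become endpoints
    ports:
    - port: 80
      targetPort: 80
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod1
    labels:
      app: endpoint-test
  spec:
    containers:
    - name: web
      image: nginx             # assumed image; must actually listen on port 80
      ports:
      - containerPort: 80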
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:26:55.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-bmdn
STEP: Creating a pod to test atomic-volume-subpath
Jul  5 14:26:55.808: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bmdn" in namespace "subpath-6206" to be "success or failure"
Jul  5 14:26:55.828: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Pending", Reason="", readiness=false. Elapsed: 20.531412ms
Jul  5 14:26:57.833: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025564581s
Jul  5 14:26:59.837: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Running", Reason="", readiness=true. Elapsed: 4.029593953s
Jul  5 14:27:01.842: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Running", Reason="", readiness=true. Elapsed: 6.034315279s
Jul  5 14:27:03.846: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Running", Reason="", readiness=true. Elapsed: 8.038794556s
Jul  5 14:27:05.851: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Running", Reason="", readiness=true. Elapsed: 10.04296537s
Jul  5 14:27:07.855: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Running", Reason="", readiness=true. Elapsed: 12.047607938s
Jul  5 14:27:09.860: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Running", Reason="", readiness=true. Elapsed: 14.051937482s
Jul  5 14:27:11.864: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Running", Reason="", readiness=true. Elapsed: 16.056554512s
Jul  5 14:27:13.869: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Running", Reason="", readiness=true. Elapsed: 18.06144389s
Jul  5 14:27:15.874: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Running", Reason="", readiness=true. Elapsed: 20.065979s
Jul  5 14:27:17.877: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Running", Reason="", readiness=true. Elapsed: 22.069802587s
Jul  5 14:27:19.883: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Running", Reason="", readiness=true. Elapsed: 24.074936178s
Jul  5 14:27:21.887: INFO: Pod "pod-subpath-test-configmap-bmdn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.078989806s
STEP: Saw pod success
Jul  5 14:27:21.887: INFO: Pod "pod-subpath-test-configmap-bmdn" satisfied condition "success or failure"
Jul  5 14:27:21.891: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-bmdn container test-container-subpath-configmap-bmdn: 
STEP: delete the pod
Jul  5 14:27:21.928: INFO: Waiting for pod pod-subpath-test-configmap-bmdn to disappear
Jul  5 14:27:21.956: INFO: Pod pod-subpath-test-configmap-bmdn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-bmdn
Jul  5 14:27:21.956: INFO: Deleting pod "pod-subpath-test-configmap-bmdn" in namespace "subpath-6206"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:27:21.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6206" for this suite.
Jul  5 14:27:27.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:27:28.053: INFO: namespace subpath-6206 deletion completed in 6.090457567s

• [SLOW TEST:32.314 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
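"mountPath of existing file" is the interesting part: with subPath, a single ConfigMap key is mounted file-over-file onto a path that already exists in the image, instead of shadowing the whole directory the way a plain volume mount would. A hedged sketch; key names, paths, and image are illustrative:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: subpath-cm                   # hypothetical name
  data:
    mount-file: "configmap contents"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-subpath-demo             # hypothetical name
  spec:
    containers:
    - name: test-container-subpath
      image: busybox                   # assumed image
      command: ["/bin/sh", "-c", "cat /etc/hostname && sleep 30"]
      volumeMounts:
      - name: cm
        mountPath: /etc/hostname       # an existing file, replaced one-for-one via subPath
        subPath: mount-file
    volumes:
    - name: cm
      configMap:
        name: subpath-cm

One trade-off worth knowing: unlike whole-volume ConfigMap mounts, subPath-mounted keys do not receive live updates when the ConfigMap changes.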
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:27:28.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-58d96ae0-bd60-4b3a-b65b-9af2163dc530
STEP: Creating a pod to test consume configMaps
Jul  5 14:27:28.119: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d45b4427-f4c9-46a7-9198-c97ef4586f0b" in namespace "projected-6485" to be "success or failure"
Jul  5 14:27:28.131: INFO: Pod "pod-projected-configmaps-d45b4427-f4c9-46a7-9198-c97ef4586f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.897981ms
Jul  5 14:27:30.135: INFO: Pod "pod-projected-configmaps-d45b4427-f4c9-46a7-9198-c97ef4586f0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016822017s
Jul  5 14:27:32.140: INFO: Pod "pod-projected-configmaps-d45b4427-f4c9-46a7-9198-c97ef4586f0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021506851s
STEP: Saw pod success
Jul  5 14:27:32.140: INFO: Pod "pod-projected-configmaps-d45b4427-f4c9-46a7-9198-c97ef4586f0b" satisfied condition "success or failure"
Jul  5 14:27:32.144: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-d45b4427-f4c9-46a7-9198-c97ef4586f0b container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 14:27:32.181: INFO: Waiting for pod pod-projected-configmaps-d45b4427-f4c9-46a7-9198-c97ef4586f0b to disappear
Jul  5 14:27:32.208: INFO: Pod pod-projected-configmaps-d45b4427-f4c9-46a7-9198-c97ef4586f0b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:27:32.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6485" for this suite.
Jul  5 14:27:38.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:27:38.302: INFO: namespace projected-6485 deletion completed in 6.090823335s

• [SLOW TEST:10.249 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
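The "with mappings" variant above remaps ConfigMap keys to custom file paths via the projected volume's items list. A hedged sketch with placeholder names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-mapping-demo
  spec:
    restartPolicy: Never
    volumes:
    - name: podcfg
      projected:
        sources:
        - configMap:
            name: demo-config
            items:
            - key: data-1              # key in the ConfigMap
              path: path/to/data-2     # file name it is exposed under
    containers:
    - name: projected-configmap-volume-test
      image: busybox
      command: ["cat", "/etc/projected/path/to/data-2"]
      volumeMounts:
      - name: podcfg
        mountPath: /etc/projected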
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:27:38.303: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jul  5 14:27:38.330: INFO: Creating ReplicaSet my-hostname-basic-e8cbde4f-0552-4ee5-8b2f-bf7d78a2b8f7
Jul  5 14:27:38.369: INFO: Pod name my-hostname-basic-e8cbde4f-0552-4ee5-8b2f-bf7d78a2b8f7: Found 0 pods out of 1
Jul  5 14:27:43.374: INFO: Pod name my-hostname-basic-e8cbde4f-0552-4ee5-8b2f-bf7d78a2b8f7: Found 1 pods out of 1
Jul  5 14:27:43.374: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e8cbde4f-0552-4ee5-8b2f-bf7d78a2b8f7" is running
Jul  5 14:27:43.377: INFO: Pod "my-hostname-basic-e8cbde4f-0552-4ee5-8b2f-bf7d78a2b8f7-r949m" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 14:27:38 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 14:27:41 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 14:27:41 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-05 14:27:38 +0000 UTC Reason: Message:}])
Jul  5 14:27:43.377: INFO: Trying to dial the pod
Jul  5 14:27:48.390: INFO: Controller my-hostname-basic-e8cbde4f-0552-4ee5-8b2f-bf7d78a2b8f7: Got expected result from replica 1 [my-hostname-basic-e8cbde4f-0552-4ee5-8b2f-bf7d78a2b8f7-r949m]: "my-hostname-basic-e8cbde4f-0552-4ee5-8b2f-bf7d78a2b8f7-r949m", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:27:48.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6302" for this suite.
Jul  5 14:27:54.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:27:54.537: INFO: namespace replicaset-6302 deletion completed in 6.141551708s

• [SLOW TEST:16.233 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
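The spec above creates a one-replica ReplicaSet and dials each pod to confirm it serves its own hostname. An equivalent standalone manifest might look like this; the serve-hostname image tag and port are assumptions based on the test's intent, not values read from the run:

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: my-hostname-basic
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: my-hostname-basic
    template:
      metadata:
        labels:
          name: my-hostname-basic
      spec:
        containers:
        - name: serve-hostname
          image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
          ports:
          - containerPort: 9376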
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:27:54.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-f1b162ee-e51c-454c-8ee1-c60e158d16e1
STEP: Creating a pod to test consume configMaps
Jul  5 14:27:54.599: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1503eead-c2ad-462a-a466-f617e36be0fd" in namespace "projected-6588" to be "success or failure"
Jul  5 14:27:54.603: INFO: Pod "pod-projected-configmaps-1503eead-c2ad-462a-a466-f617e36be0fd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.79076ms
Jul  5 14:27:56.623: INFO: Pod "pod-projected-configmaps-1503eead-c2ad-462a-a466-f617e36be0fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024295718s
Jul  5 14:27:58.627: INFO: Pod "pod-projected-configmaps-1503eead-c2ad-462a-a466-f617e36be0fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028367291s
STEP: Saw pod success
Jul  5 14:27:58.627: INFO: Pod "pod-projected-configmaps-1503eead-c2ad-462a-a466-f617e36be0fd" satisfied condition "success or failure"
Jul  5 14:27:58.630: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-1503eead-c2ad-462a-a466-f617e36be0fd container projected-configmap-volume-test: 
STEP: delete the pod
Jul  5 14:27:58.785: INFO: Waiting for pod pod-projected-configmaps-1503eead-c2ad-462a-a466-f617e36be0fd to disappear
Jul  5 14:27:58.843: INFO: Pod pod-projected-configmaps-1503eead-c2ad-462a-a466-f617e36be0fd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:27:58.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6588" for this suite.
Jul  5 14:28:04.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:28:04.963: INFO: namespace projected-6588 deletion completed in 6.115821145s

• [SLOW TEST:10.426 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
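The defaultMode variant only changes the permission bits on the projected files. A sketch of the relevant volume fragment (the mode value is chosen for illustration):

  volumes:
  - name: podcfg
    projected:
      defaultMode: 0400      # applied to every projected file unless overridden
      sources:
      - configMap:
          name: demo-config

A per-file override is also possible via items[].mode on each source.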
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:28:04.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jul  5 14:28:05.024: INFO: Waiting up to 5m0s for pod "client-containers-201ad2db-d9fe-4504-ac9c-3fb6083784a6" in namespace "containers-5051" to be "success or failure"
Jul  5 14:28:05.028: INFO: Pod "client-containers-201ad2db-d9fe-4504-ac9c-3fb6083784a6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.484092ms
Jul  5 14:28:07.031: INFO: Pod "client-containers-201ad2db-d9fe-4504-ac9c-3fb6083784a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007130632s
Jul  5 14:28:09.035: INFO: Pod "client-containers-201ad2db-d9fe-4504-ac9c-3fb6083784a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011394652s
STEP: Saw pod success
Jul  5 14:28:09.036: INFO: Pod "client-containers-201ad2db-d9fe-4504-ac9c-3fb6083784a6" satisfied condition "success or failure"
Jul  5 14:28:09.039: INFO: Trying to get logs from node iruya-worker pod client-containers-201ad2db-d9fe-4504-ac9c-3fb6083784a6 container test-container: 
STEP: delete the pod
Jul  5 14:28:09.060: INFO: Waiting for pod client-containers-201ad2db-d9fe-4504-ac9c-3fb6083784a6 to disappear
Jul  5 14:28:09.064: INFO: Pod client-containers-201ad2db-d9fe-4504-ac9c-3fb6083784a6 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:28:09.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5051" for this suite.
Jul  5 14:28:15.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:28:15.187: INFO: namespace containers-5051 deletion completed in 6.118325948s

• [SLOW TEST:10.222 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
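In a pod spec, the container's command field replaces the image's ENTRYPOINT (and args replaces CMD), which is exactly what "override command" exercises. A minimal sketch with placeholder names:

  apiVersion: v1
  kind: Pod
  metadata:
    name: client-containers-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox
      command: ["/bin/echo", "entrypoint overridden"]   # replaces ENTRYPOINT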
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:28:15.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  5 14:28:15.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7639'
Jul  5 14:28:15.386: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  5 14:28:15.387: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jul  5 14:28:15.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7639'
Jul  5 14:28:15.548: INFO: stderr: ""
Jul  5 14:28:15.548: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:28:15.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7639" for this suite.
Jul  5 14:28:37.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:28:37.670: INFO: namespace kubectl-7639 deletion completed in 22.095088543s

• [SLOW TEST:22.482 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
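As the stderr above notes, the job/v1 generator for kubectl run is deprecated. On reasonably recent clients the non-deprecated equivalent is kubectl create job (names mirror the test):

  kubectl create job e2e-test-nginx-job \
    --image=docker.io/library/nginx:1.14-alpine \
    --namespace=kubectl-7639
  kubectl delete jobs e2e-test-nginx-job --namespace=kubectl-7639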
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:28:37.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul  5 14:28:45.803: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  5 14:28:45.840: INFO: Pod pod-with-poststart-http-hook still exists
Jul  5 14:28:47.841: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  5 14:28:47.845: INFO: Pod pod-with-poststart-http-hook still exists
Jul  5 14:28:49.841: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jul  5 14:28:49.845: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:28:49.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8281" for this suite.
Jul  5 14:29:11.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:29:11.937: INFO: namespace container-lifecycle-hook-8281 deletion completed in 22.087305599s

• [SLOW TEST:34.268 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
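The hook pod above is the interesting part: a postStart httpGet fires against the separate handler pod (created in the BeforeEach step) as soon as the container starts. A hedged sketch of the lifecycle stanza; path, port, and host are placeholders, not values from the run:

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook
  spec:
    containers:
    - name: hooked
      image: docker.io/library/nginx:1.14-alpine
      lifecycle:
        postStart:
          httpGet:
            path: /echo?msg=poststart   # placeholder path
            port: 8080                  # placeholder port
            host: 10.244.1.10           # placeholder handler-pod IP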
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:29:11.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jul  5 14:29:12.014: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b317dc7d-c67e-4332-9598-1435aa0130ac" in namespace "downward-api-1931" to be "success or failure"
Jul  5 14:29:12.017: INFO: Pod "downwardapi-volume-b317dc7d-c67e-4332-9598-1435aa0130ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.660988ms
Jul  5 14:29:14.022: INFO: Pod "downwardapi-volume-b317dc7d-c67e-4332-9598-1435aa0130ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00825278s
Jul  5 14:29:16.027: INFO: Pod "downwardapi-volume-b317dc7d-c67e-4332-9598-1435aa0130ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012935861s
STEP: Saw pod success
Jul  5 14:29:16.027: INFO: Pod "downwardapi-volume-b317dc7d-c67e-4332-9598-1435aa0130ac" satisfied condition "success or failure"
Jul  5 14:29:16.030: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b317dc7d-c67e-4332-9598-1435aa0130ac container client-container: 
STEP: delete the pod
Jul  5 14:29:16.085: INFO: Waiting for pod downwardapi-volume-b317dc7d-c67e-4332-9598-1435aa0130ac to disappear
Jul  5 14:29:16.105: INFO: Pod downwardapi-volume-b317dc7d-c67e-4332-9598-1435aa0130ac no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:29:16.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1931" for this suite.
Jul  5 14:29:22.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:29:22.196: INFO: namespace downward-api-1931 deletion completed in 6.087073326s

• [SLOW TEST:10.257 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
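The downward API volume exposes the container's own resource request as a file. A sketch with placeholder names; the divisor of 1m makes the file contain the request in millicores:

  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-volume-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: cpu_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.cpu
            divisor: 1m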
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:29:22.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jul  5 14:29:28.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-39c19314-35eb-452b-ac2a-27b8a48d0440 -c busybox-main-container --namespace=emptydir-2097 -- cat /usr/share/volumeshare/shareddata.txt'
Jul  5 14:29:28.566: INFO: stderr: "I0705 14:29:28.454147    2252 log.go:172] (0xc0008da420) (0xc00074e820) Create stream\nI0705 14:29:28.454192    2252 log.go:172] (0xc0008da420) (0xc00074e820) Stream added, broadcasting: 1\nI0705 14:29:28.458348    2252 log.go:172] (0xc0008da420) Reply frame received for 1\nI0705 14:29:28.458403    2252 log.go:172] (0xc0008da420) (0xc0005ba1e0) Create stream\nI0705 14:29:28.458438    2252 log.go:172] (0xc0008da420) (0xc0005ba1e0) Stream added, broadcasting: 3\nI0705 14:29:28.460108    2252 log.go:172] (0xc0008da420) Reply frame received for 3\nI0705 14:29:28.460154    2252 log.go:172] (0xc0008da420) (0xc000894000) Create stream\nI0705 14:29:28.460185    2252 log.go:172] (0xc0008da420) (0xc000894000) Stream added, broadcasting: 5\nI0705 14:29:28.462530    2252 log.go:172] (0xc0008da420) Reply frame received for 5\nI0705 14:29:28.558875    2252 log.go:172] (0xc0008da420) Data frame received for 5\nI0705 14:29:28.558947    2252 log.go:172] (0xc000894000) (5) Data frame handling\nI0705 14:29:28.558987    2252 log.go:172] (0xc0008da420) Data frame received for 3\nI0705 14:29:28.559029    2252 log.go:172] (0xc0005ba1e0) (3) Data frame handling\nI0705 14:29:28.559061    2252 log.go:172] (0xc0005ba1e0) (3) Data frame sent\nI0705 14:29:28.559089    2252 log.go:172] (0xc0008da420) Data frame received for 3\nI0705 14:29:28.559104    2252 log.go:172] (0xc0005ba1e0) (3) Data frame handling\nI0705 14:29:28.560747    2252 log.go:172] (0xc0008da420) Data frame received for 1\nI0705 14:29:28.560783    2252 log.go:172] (0xc00074e820) (1) Data frame handling\nI0705 14:29:28.560800    2252 log.go:172] (0xc00074e820) (1) Data frame sent\nI0705 14:29:28.560815    2252 log.go:172] (0xc0008da420) (0xc00074e820) Stream removed, broadcasting: 1\nI0705 14:29:28.561018    2252 log.go:172] (0xc0008da420) Go away received\nI0705 14:29:28.561332    2252 log.go:172] (0xc0008da420) (0xc00074e820) Stream removed, broadcasting: 1\nI0705 14:29:28.561357    2252 log.go:172] (0xc0008da420) (0xc0005ba1e0) Stream removed, broadcasting: 3\nI0705 14:29:28.561369    2252 log.go:172] (0xc0008da420) (0xc000894000) Stream removed, broadcasting: 5\n"
Jul  5 14:29:28.566: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:29:28.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2097" for this suite.
Jul  5 14:29:34.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:29:34.657: INFO: namespace emptydir-2097 deletion completed in 6.08675144s

• [SLOW TEST:12.461 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
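The shared-volume pattern above relies on both containers mounting the same emptyDir. A sketch matching the shape of the test pod (container names follow the log; the writer command is an assumption about how the file gets there):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-sharedvolume-demo
  spec:
    volumes:
    - name: shared-data
      emptyDir: {}
    containers:
    - name: busybox-main-container
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: shared-data
        mountPath: /usr/share/volumeshare
    - name: busybox-sub-container
      image: busybox
      command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
      volumeMounts:
      - name: shared-data
        mountPath: /usr/share/volumeshare

Reading the file back then works exactly as in the log:

  kubectl exec pod-sharedvolume-demo -c busybox-main-container \
    -- cat /usr/share/volumeshare/shareddata.txt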
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:29:34.658: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jul  5 14:29:34.719: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul  5 14:29:34.754: INFO: Waiting for terminating namespaces to be deleted...
Jul  5 14:29:34.756: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Jul  5 14:29:34.761: INFO: kindnet-469kb from kube-system started at 2020-07-04 09:21:50 +0000 UTC (1 container statuses recorded)
Jul  5 14:29:34.761: INFO: 	Container kindnet-cni ready: true, restart count 0
Jul  5 14:29:34.761: INFO: kube-proxy-nxrg9 from kube-system started at 2020-07-04 09:21:50 +0000 UTC (1 container statuses recorded)
Jul  5 14:29:34.761: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  5 14:29:34.761: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Jul  5 14:29:34.765: INFO: kube-proxy-wvch7 from kube-system started at 2020-07-04 09:21:50 +0000 UTC (1 container statuses recorded)
Jul  5 14:29:34.765: INFO: 	Container kube-proxy ready: true, restart count 0
Jul  5 14:29:34.765: INFO: kindnet-gj45r from kube-system started at 2020-07-04 09:21:50 +0000 UTC (1 container statuses recorded)
Jul  5 14:29:34.765: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-7a315278-2da8-487a-a062-7aa42eba16b1 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-7a315278-2da8-487a-a062-7aa42eba16b1 off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-7a315278-2da8-487a-a062-7aa42eba16b1
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:29:42.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1332" for this suite.
Jul  5 14:29:52.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:29:52.992: INFO: namespace sched-pred-1332 deletion completed in 10.088113056s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:18.334 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
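The scheduling steps above translate directly into kubectl: label a node, then create a pod whose nodeSelector matches that label. Label key and value follow the log; the pod fragment is a sketch:

  kubectl label node iruya-worker2 \
    kubernetes.io/e2e-7a315278-2da8-487a-a062-7aa42eba16b1=42

  # pod fragment that can only land on the labelled node:
  spec:
    nodeSelector:
      kubernetes.io/e2e-7a315278-2da8-487a-a062-7aa42eba16b1: "42"

  # a trailing dash removes the label again, as the test's cleanup does:
  kubectl label node iruya-worker2 \
    kubernetes.io/e2e-7a315278-2da8-487a-a062-7aa42eba16b1-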
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:29:52.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jul  5 14:29:53.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9790'
Jul  5 14:29:53.365: INFO: stderr: ""
Jul  5 14:29:53.365: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  5 14:29:53.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9790'
Jul  5 14:29:53.485: INFO: stderr: ""
Jul  5 14:29:53.485: INFO: stdout: "update-demo-nautilus-bxc7l update-demo-nautilus-jhj6h "
Jul  5 14:29:53.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bxc7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9790'
Jul  5 14:29:53.574: INFO: stderr: ""
Jul  5 14:29:53.574: INFO: stdout: ""
Jul  5 14:29:53.574: INFO: update-demo-nautilus-bxc7l is created but not running
Jul  5 14:29:58.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9790'
Jul  5 14:29:58.679: INFO: stderr: ""
Jul  5 14:29:58.679: INFO: stdout: "update-demo-nautilus-bxc7l update-demo-nautilus-jhj6h "
Jul  5 14:29:58.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bxc7l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9790'
Jul  5 14:29:58.784: INFO: stderr: ""
Jul  5 14:29:58.784: INFO: stdout: "true"
Jul  5 14:29:58.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bxc7l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9790'
Jul  5 14:29:58.878: INFO: stderr: ""
Jul  5 14:29:58.878: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 14:29:58.878: INFO: validating pod update-demo-nautilus-bxc7l
Jul  5 14:29:58.883: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 14:29:58.883: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 14:29:58.883: INFO: update-demo-nautilus-bxc7l is verified up and running
Jul  5 14:29:58.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhj6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9790'
Jul  5 14:29:58.982: INFO: stderr: ""
Jul  5 14:29:58.982: INFO: stdout: "true"
Jul  5 14:29:58.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jhj6h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9790'
Jul  5 14:29:59.074: INFO: stderr: ""
Jul  5 14:29:59.074: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 14:29:59.074: INFO: validating pod update-demo-nautilus-jhj6h
Jul  5 14:29:59.078: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 14:29:59.078: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 14:29:59.078: INFO: update-demo-nautilus-jhj6h is verified up and running
STEP: rolling-update to new replication controller
Jul  5 14:29:59.080: INFO: scanned /root for discovery docs: 
Jul  5 14:29:59.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9790'
Jul  5 14:30:21.770: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jul  5 14:30:21.770: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  5 14:30:21.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9790'
Jul  5 14:30:21.855: INFO: stderr: ""
Jul  5 14:30:21.855: INFO: stdout: "update-demo-kitten-b4h7d update-demo-kitten-zpkdg "
Jul  5 14:30:21.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b4h7d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9790'
Jul  5 14:30:21.947: INFO: stderr: ""
Jul  5 14:30:21.947: INFO: stdout: "true"
Jul  5 14:30:21.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b4h7d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9790'
Jul  5 14:30:22.033: INFO: stderr: ""
Jul  5 14:30:22.033: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul  5 14:30:22.033: INFO: validating pod update-demo-kitten-b4h7d
Jul  5 14:30:22.037: INFO: got data: {
  "image": "kitten.jpg"
}

Jul  5 14:30:22.037: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul  5 14:30:22.037: INFO: update-demo-kitten-b4h7d is verified up and running
Jul  5 14:30:22.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zpkdg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9790'
Jul  5 14:30:22.129: INFO: stderr: ""
Jul  5 14:30:22.129: INFO: stdout: "true"
Jul  5 14:30:22.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zpkdg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9790'
Jul  5 14:30:22.226: INFO: stderr: ""
Jul  5 14:30:22.227: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jul  5 14:30:22.227: INFO: validating pod update-demo-kitten-zpkdg
Jul  5 14:30:22.230: INFO: got data: {
  "image": "kitten.jpg"
}

Jul  5 14:30:22.230: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Jul  5 14:30:22.230: INFO: update-demo-kitten-zpkdg is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:30:22.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9790" for this suite.
Jul  5 14:30:44.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:30:44.336: INFO: namespace kubectl-9790 deletion completed in 22.102271441s

• [SLOW TEST:51.344 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
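kubectl rolling-update (deprecated in favor of Deployments, as the stderr above warns) replaces an RC's pods one at a time. The Deployment-based equivalent of the nautilus-to-kitten update would be roughly as follows; the deployment and container names are illustrative:

  kubectl set image deployment/update-demo \
    update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
  kubectl rollout status deployment/update-demo
  kubectl rollout undo deployment/update-demo   # the "roll back" half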
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:30:44.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:30:44.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5268" for this suite.
Jul  5 14:31:06.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:31:06.621: INFO: namespace pods-5268 deletion completed in 22.198944334s

• [SLOW TEST:22.285 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
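The QoS class is computed by the API server from the pod's resources: requests equal to limits for every container yields Guaranteed. A sketch with placeholder names, plus the API-side check the test performs:

  apiVersion: v1
  kind: Pod
  metadata:
    name: qos-demo
  spec:
    containers:
    - name: demo
      image: busybox
      command: ["sleep", "3600"]
      resources:
        requests: {cpu: 100m, memory: 100Mi}
        limits:   {cpu: 100m, memory: 100Mi}

  kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # -> Guaranteed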
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:31:06.622: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jul  5 14:31:11.728: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:31:12.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3015" for this suite.
Jul  5 14:31:34.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:31:34.934: INFO: namespace replicaset-3015 deletion completed in 22.169752275s

• [SLOW TEST:28.312 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
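Adoption and release hinge entirely on labels and ownerReferences: a bare pod whose labels match the ReplicaSet's selector gets an ownerReference added; relabelling it removes that reference and the RS spins up a replacement. Sketched as kubectl, with the pod and label names mirroring the log:

  # release: change the matched label on the adopted pod
  kubectl label pod pod-adoption-release name=not-matching --overwrite
  # the ownerReferences list on the released pod is now empty:
  kubectl get pod pod-adoption-release \
    -o jsonpath='{.metadata.ownerReferences}'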
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:31:34.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jul  5 14:31:35.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8093'
Jul  5 14:31:35.303: INFO: stderr: ""
Jul  5 14:31:35.303: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  5 14:31:35.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8093'
Jul  5 14:31:35.419: INFO: stderr: ""
Jul  5 14:31:35.419: INFO: stdout: "update-demo-nautilus-9txm5 update-demo-nautilus-hqr5w "
Jul  5 14:31:35.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9txm5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:35.506: INFO: stderr: ""
Jul  5 14:31:35.506: INFO: stdout: ""
Jul  5 14:31:35.507: INFO: update-demo-nautilus-9txm5 is created but not running
Jul  5 14:31:40.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8093'
Jul  5 14:31:40.608: INFO: stderr: ""
Jul  5 14:31:40.608: INFO: stdout: "update-demo-nautilus-9txm5 update-demo-nautilus-hqr5w "
Jul  5 14:31:40.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9txm5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:40.699: INFO: stderr: ""
Jul  5 14:31:40.699: INFO: stdout: "true"
Jul  5 14:31:40.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9txm5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:40.797: INFO: stderr: ""
Jul  5 14:31:40.797: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 14:31:40.797: INFO: validating pod update-demo-nautilus-9txm5
Jul  5 14:31:40.801: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 14:31:40.801: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 14:31:40.801: INFO: update-demo-nautilus-9txm5 is verified up and running
Jul  5 14:31:40.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqr5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:40.898: INFO: stderr: ""
Jul  5 14:31:40.898: INFO: stdout: "true"
Jul  5 14:31:40.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqr5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:40.990: INFO: stderr: ""
Jul  5 14:31:40.990: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 14:31:40.990: INFO: validating pod update-demo-nautilus-hqr5w
Jul  5 14:31:40.994: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 14:31:40.994: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 14:31:40.994: INFO: update-demo-nautilus-hqr5w is verified up and running
STEP: scaling down the replication controller
Jul  5 14:31:40.997: INFO: scanned /root for discovery docs: 
Jul  5 14:31:40.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8093'
Jul  5 14:31:42.130: INFO: stderr: ""
Jul  5 14:31:42.130: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  5 14:31:42.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8093'
Jul  5 14:31:42.227: INFO: stderr: ""
Jul  5 14:31:42.227: INFO: stdout: "update-demo-nautilus-9txm5 update-demo-nautilus-hqr5w "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jul  5 14:31:47.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8093'
Jul  5 14:31:47.326: INFO: stderr: ""
Jul  5 14:31:47.326: INFO: stdout: "update-demo-nautilus-hqr5w "
Jul  5 14:31:47.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqr5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:47.425: INFO: stderr: ""
Jul  5 14:31:47.425: INFO: stdout: "true"
Jul  5 14:31:47.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqr5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:47.511: INFO: stderr: ""
Jul  5 14:31:47.511: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 14:31:47.511: INFO: validating pod update-demo-nautilus-hqr5w
Jul  5 14:31:47.514: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 14:31:47.514: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 14:31:47.514: INFO: update-demo-nautilus-hqr5w is verified up and running
STEP: scaling up the replication controller
Jul  5 14:31:47.516: INFO: scanned /root for discovery docs: 
Jul  5 14:31:47.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8093'
Jul  5 14:31:48.665: INFO: stderr: ""
Jul  5 14:31:48.666: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul  5 14:31:48.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8093'
Jul  5 14:31:48.780: INFO: stderr: ""
Jul  5 14:31:48.781: INFO: stdout: "update-demo-nautilus-hqr5w update-demo-nautilus-tpwfs "
Jul  5 14:31:48.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqr5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:48.868: INFO: stderr: ""
Jul  5 14:31:48.868: INFO: stdout: "true"
Jul  5 14:31:48.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqr5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:48.956: INFO: stderr: ""
Jul  5 14:31:48.956: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 14:31:48.956: INFO: validating pod update-demo-nautilus-hqr5w
Jul  5 14:31:48.959: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 14:31:48.959: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 14:31:48.959: INFO: update-demo-nautilus-hqr5w is verified up and running
Jul  5 14:31:48.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpwfs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:49.058: INFO: stderr: ""
Jul  5 14:31:49.058: INFO: stdout: ""
Jul  5 14:31:49.058: INFO: update-demo-nautilus-tpwfs is created but not running
Jul  5 14:31:54.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8093'
Jul  5 14:31:54.163: INFO: stderr: ""
Jul  5 14:31:54.163: INFO: stdout: "update-demo-nautilus-hqr5w update-demo-nautilus-tpwfs "
Jul  5 14:31:54.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqr5w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:54.250: INFO: stderr: ""
Jul  5 14:31:54.250: INFO: stdout: "true"
Jul  5 14:31:54.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hqr5w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:54.348: INFO: stderr: ""
Jul  5 14:31:54.348: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 14:31:54.348: INFO: validating pod update-demo-nautilus-hqr5w
Jul  5 14:31:54.351: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 14:31:54.352: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 14:31:54.352: INFO: update-demo-nautilus-hqr5w is verified up and running
Jul  5 14:31:54.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpwfs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:54.433: INFO: stderr: ""
Jul  5 14:31:54.433: INFO: stdout: "true"
Jul  5 14:31:54.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tpwfs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8093'
Jul  5 14:31:54.541: INFO: stderr: ""
Jul  5 14:31:54.541: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul  5 14:31:54.541: INFO: validating pod update-demo-nautilus-tpwfs
Jul  5 14:31:54.546: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul  5 14:31:54.546: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul  5 14:31:54.546: INFO: update-demo-nautilus-tpwfs is verified up and running
STEP: using delete to clean up resources
Jul  5 14:31:54.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8093'
Jul  5 14:31:54.642: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul  5 14:31:54.642: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul  5 14:31:54.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8093'
Jul  5 14:31:54.776: INFO: stderr: "No resources found.\n"
Jul  5 14:31:54.776: INFO: stdout: ""
Jul  5 14:31:54.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8093 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul  5 14:31:54.909: INFO: stderr: ""
Jul  5 14:31:54.909: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:31:54.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8093" for this suite.
Jul  5 14:32:16.929: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:32:17.003: INFO: namespace kubectl-8093 deletion completed in 22.090187908s

• [SLOW TEST:42.069 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jul  5 14:32:17.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jul  5 14:32:17.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-529'
Jul  5 14:32:17.189: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jul  5 14:32:17.189: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
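As the deprecation warning in stderr notes, generator-based kubectl run for anything other than bare pods was on its way out; the non-deprecated equivalent of the command above would be kubectl create deployment. A sketch against the same image and namespace:

    kubectl --kubeconfig=/root/.kube/config create deployment e2e-test-nginx-deployment \
      --image=docker.io/library/nginx:1.14-alpine -n kubectl-529
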
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jul  5 14:32:19.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-529'
Jul  5 14:32:19.432: INFO: stderr: ""
Jul  5 14:32:19.432: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
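Note the API-group asymmetry: creation reported deployment.apps/e2e-test-nginx-deployment, while deletion reported deployment.extensions "e2e-test-nginx-deployment" deleted. In a v1.15-era cluster Deployments are still served under both the apps and extensions groups, and kubectl resolved the bare "deployment" kind differently in the two commands. One way to inspect which groups serve the resource (a sketch, not output from this run):

    kubectl --kubeconfig=/root/.kube/config api-resources | grep -w deployments
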
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jul  5 14:32:19.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-529" for this suite.
Jul  5 14:32:41.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jul  5 14:32:41.540: INFO: namespace kubectl-529 deletion completed in 22.10420026s

• [SLOW TEST:24.537 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jul  5 14:32:41.542: INFO: Running AfterSuite actions on all nodes
Jul  5 14:32:41.542: INFO: Running AfterSuite actions on node 1
Jul  5 14:32:41.542: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 5810.681 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS
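
The closing tallies are internally consistent: 215 passed + 0 failed + 0 pending + 4198 skipped = 4413, matching the "Ran 215 of 4413 Specs" total above. A trivial check:

    echo $((215 + 0 + 0 + 4198))   # prints 4413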